2025-03-23 12:35:20.703281 | Job console starting...
2025-03-23 12:35:20.716000 | Updating repositories
2025-03-23 12:35:20.766103 | Preparing job workspace
2025-03-23 12:35:22.405783 | Running Ansible setup...
2025-03-23 12:35:27.215812 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-03-23 12:35:27.956145 |
2025-03-23 12:35:27.956313 | PLAY [Base pre]
2025-03-23 12:35:27.987647 |
2025-03-23 12:35:27.987792 | TASK [Setup log path fact]
2025-03-23 12:35:28.025956 | orchestrator | ok
2025-03-23 12:35:28.045625 |
2025-03-23 12:35:28.045767 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-03-23 12:35:28.091141 | orchestrator | ok
2025-03-23 12:35:28.107635 |
2025-03-23 12:35:28.107751 | TASK [emit-job-header : Print job information]
2025-03-23 12:35:28.170710 | # Job Information
2025-03-23 12:35:28.170879 | Ansible Version: 2.15.3
2025-03-23 12:35:28.170912 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-03-23 12:35:28.170942 | Pipeline: post
2025-03-23 12:35:28.170963 | Executor: 7d211f194f6a
2025-03-23 12:35:28.170982 | Triggered by: https://github.com/osism/testbed/commit/63cbde76994a73e44616f52b56145a3e075339ad
2025-03-23 12:35:28.171001 | Event ID: 439c62fe-07e3-11f0-98db-a91033760734
2025-03-23 12:35:28.186521 |
2025-03-23 12:35:28.186647 | LOOP [emit-job-header : Print node information]
2025-03-23 12:35:28.334636 | orchestrator | ok:
2025-03-23 12:35:28.334895 | orchestrator | # Node Information
2025-03-23 12:35:28.334940 | orchestrator | Inventory Hostname: orchestrator
2025-03-23 12:35:28.334971 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-03-23 12:35:28.334999 | orchestrator | Username: zuul-testbed01
2025-03-23 12:35:28.335026 | orchestrator | Distro: Debian 12.10
2025-03-23 12:35:28.335055 | orchestrator | Provider: static-testbed
2025-03-23 12:35:28.335082 | orchestrator | Label: testbed-orchestrator
2025-03-23 12:35:28.335108 | orchestrator | Product Name: OpenStack Nova
2025-03-23 12:35:28.335133 | orchestrator | Interface IP: 81.163.193.140
2025-03-23 12:35:28.362036 |
2025-03-23 12:35:28.362175 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-03-23 12:35:28.850571 | orchestrator -> localhost | changed
2025-03-23 12:35:28.872555 |
2025-03-23 12:35:28.872958 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-03-23 12:35:29.944415 | orchestrator -> localhost | changed
2025-03-23 12:35:29.971288 |
2025-03-23 12:35:29.971524 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-03-23 12:35:30.263229 | orchestrator -> localhost | ok
2025-03-23 12:35:30.271557 |
2025-03-23 12:35:30.271679 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-03-23 12:35:30.315584 | orchestrator | ok
2025-03-23 12:35:30.333507 | orchestrator | included: /var/lib/zuul/builds/beeeea37af1a4630ae807f6527409ece/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-03-23 12:35:30.342445 |
2025-03-23 12:35:30.342549 | TASK [add-build-sshkey : Create Temp SSH key]
2025-03-23 12:35:31.222372 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-03-23 12:35:31.222665 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/beeeea37af1a4630ae807f6527409ece/work/beeeea37af1a4630ae807f6527409ece_id_rsa
2025-03-23 12:35:31.222721 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/beeeea37af1a4630ae807f6527409ece/work/beeeea37af1a4630ae807f6527409ece_id_rsa.pub
2025-03-23 12:35:31.222760 | orchestrator -> localhost | The key fingerprint is:
2025-03-23 12:35:31.222798 | orchestrator -> localhost | SHA256:cLeVpJBi834Z01WZP0sbD9oCcF6a+ZbjSnq3MPecwoE zuul-build-sshkey
2025-03-23 12:35:31.222834 | orchestrator -> localhost | The key's randomart image is:
2025-03-23 12:35:31.222867 | orchestrator -> localhost | +---[RSA 3072]----+
2025-03-23 12:35:31.222900 | orchestrator -> localhost | | .. . .+|
2025-03-23 12:35:31.222932 | orchestrator -> localhost | | + o..o...o |
2025-03-23 12:35:31.222979 | orchestrator -> localhost | | ..+.+o*o. .|
2025-03-23 12:35:31.223013 | orchestrator -> localhost | | o..Oo. .+.|
2025-03-23 12:35:31.223045 | orchestrator -> localhost | | .S .B +..*|
2025-03-23 12:35:31.223076 | orchestrator -> localhost | | . E O .o.|
2025-03-23 12:35:31.223120 | orchestrator -> localhost | | .++.+ |
2025-03-23 12:35:31.223156 | orchestrator -> localhost | | o.+=o . |
2025-03-23 12:35:31.223189 | orchestrator -> localhost | | ...o.o+ |
2025-03-23 12:35:31.223221 | orchestrator -> localhost | +----[SHA256]-----+
2025-03-23 12:35:31.223297 | orchestrator -> localhost | ok: Runtime: 0:00:00.364265
2025-03-23 12:35:31.235581 |
2025-03-23 12:35:31.235714 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-03-23 12:35:31.284322 | orchestrator | ok
2025-03-23 12:35:31.298500 | orchestrator | included: /var/lib/zuul/builds/beeeea37af1a4630ae807f6527409ece/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-03-23 12:35:31.309150 |
2025-03-23 12:35:31.309247 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-03-23 12:35:31.333269 | orchestrator | skipping: Conditional result was False
2025-03-23 12:35:31.343634 |
2025-03-23 12:35:31.343737 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-03-23 12:35:31.892059 | orchestrator | changed
2025-03-23 12:35:31.903250 |
2025-03-23 12:35:31.903395 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-03-23 12:35:32.168872 | orchestrator | ok
2025-03-23 12:35:32.180786 |
2025-03-23 12:35:32.180908 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-03-23 12:35:32.635527 | orchestrator | ok
2025-03-23 12:35:32.643561 |
2025-03-23 12:35:32.643675 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-03-23 12:35:33.004660 | orchestrator | ok
2025-03-23 12:35:33.014473 |
2025-03-23 12:35:33.014596 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-03-23 12:35:33.040125 | orchestrator | skipping: Conditional result was False
2025-03-23 12:35:33.090363 |
2025-03-23 12:35:33.090572 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-03-23 12:35:33.524364 | orchestrator -> localhost | changed
2025-03-23 12:35:33.545174 |
2025-03-23 12:35:33.545298 | TASK [add-build-sshkey : Add back temp key]
2025-03-23 12:35:33.960116 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/beeeea37af1a4630ae807f6527409ece/work/beeeea37af1a4630ae807f6527409ece_id_rsa (zuul-build-sshkey)
2025-03-23 12:35:33.960370 | orchestrator -> localhost | ok: Runtime: 0:00:00.016882
2025-03-23 12:35:33.971821 |
2025-03-23 12:35:33.971950 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-03-23 12:35:34.332156 | orchestrator | ok
2025-03-23 12:35:34.340788 |
2025-03-23 12:35:34.340977 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-03-23 12:35:34.366232 | orchestrator | skipping: Conditional result was False
2025-03-23 12:35:34.393959 |
2025-03-23 12:35:34.394078 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-03-23 12:35:34.846788 | orchestrator | ok
2025-03-23 12:35:34.861533 |
2025-03-23 12:35:34.861656 | TASK [validate-host : Define zuul_info_dir fact]
2025-03-23 12:35:34.893849 | orchestrator | ok
2025-03-23 12:35:34.901284 |
2025-03-23 12:35:34.901427 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-03-23 12:35:35.243824 | orchestrator -> localhost | ok
2025-03-23 12:35:35.274645 |
2025-03-23 12:35:35.275047 | TASK [validate-host : Collect information about the host]
2025-03-23 12:35:36.420546 | orchestrator | ok
2025-03-23 12:35:36.439106 |
2025-03-23 12:35:36.439223 | TASK [validate-host : Sanitize hostname]
2025-03-23 12:35:36.504129 | orchestrator | ok
2025-03-23 12:35:36.512241 |
2025-03-23 12:35:36.512428 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-03-23 12:35:37.076501 | orchestrator -> localhost | changed
2025-03-23 12:35:37.087839 |
2025-03-23 12:35:37.087958 | TASK [validate-host : Collect information about zuul worker]
2025-03-23 12:35:37.583119 | orchestrator | ok
2025-03-23 12:35:37.608152 |
2025-03-23 12:35:37.608814 | TASK [validate-host : Write out all zuul information for each host]
2025-03-23 12:35:38.172115 | orchestrator -> localhost | changed
2025-03-23 12:35:38.197289 |
2025-03-23 12:35:38.197454 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-03-23 12:35:38.474134 | orchestrator | ok
2025-03-23 12:35:38.482037 |
2025-03-23 12:35:38.482151 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-03-23 12:36:20.515467 | orchestrator | changed:
2025-03-23 12:36:20.515740 | orchestrator | .d..t...... src/
2025-03-23 12:36:20.515792 | orchestrator | .d..t...... src/github.com/
2025-03-23 12:36:20.515827 | orchestrator | .d..t...... src/github.com/osism/
2025-03-23 12:36:20.515858 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-03-23 12:36:20.515887 | orchestrator | RedHat.yml
2025-03-23 12:36:20.531581 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-03-23 12:36:20.531599 | orchestrator | RedHat.yml
2025-03-23 12:36:20.531650 | orchestrator | = 1.53.0"...
2025-03-23 12:36:36.045838 | orchestrator | 12:36:36.045 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-03-23 12:36:37.380496 | orchestrator | 12:36:37.380 STDOUT terraform: - Installing hashicorp/null v3.2.3...
2025-03-23 12:36:38.376119 | orchestrator | 12:36:38.375 STDOUT terraform: - Installed hashicorp/null v3.2.3 (signed, key ID 0C0AF313E5FD9F80)
2025-03-23 12:36:39.250085 | orchestrator | 12:36:39.249 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-03-23 12:36:41.303386 | orchestrator | 12:36:41.303 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-03-23 12:36:42.491612 | orchestrator | 12:36:42.491 STDOUT terraform: - Installing hashicorp/local v2.5.2...
2025-03-23 12:36:43.454000 | orchestrator | 12:36:43.453 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-03-23 12:36:43.454145 | orchestrator | 12:36:43.453 STDOUT terraform: Providers are signed by their developers.
2025-03-23 12:36:43.454165 | orchestrator | 12:36:43.453 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-03-23 12:36:43.454205 | orchestrator | 12:36:43.453 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-03-23 12:36:43.454219 | orchestrator | 12:36:43.454 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-03-23 12:36:43.454231 | orchestrator | 12:36:43.454 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-03-23 12:36:43.454246 | orchestrator | 12:36:43.454 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-03-23 12:36:43.454305 | orchestrator | 12:36:43.454 STDOUT terraform: you run "tofu init" in the future.
2025-03-23 12:36:43.454318 | orchestrator | 12:36:43.454 STDOUT terraform: OpenTofu has been successfully initialized!
2025-03-23 12:36:43.454333 | orchestrator | 12:36:43.454 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-03-23 12:36:43.454344 | orchestrator | 12:36:43.454 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-03-23 12:36:43.454356 | orchestrator | 12:36:43.454 STDOUT terraform: should now work.
2025-03-23 12:36:43.454405 | orchestrator | 12:36:43.454 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-03-23 12:36:43.454451 | orchestrator | 12:36:43.454 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-03-23 12:36:43.454495 | orchestrator | 12:36:43.454 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-03-23 12:36:43.594921 | orchestrator | 12:36:43.594 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-03-23 12:36:43.770204 | orchestrator | 12:36:43.770 STDOUT terraform: Created and switched to workspace "ci"!
2025-03-23 12:36:43.770280 | orchestrator | 12:36:43.770 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-03-23 12:36:43.770368 | orchestrator | 12:36:43.770 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-03-23 12:36:43.770403 | orchestrator | 12:36:43.770 STDOUT terraform: for this configuration.
2025-03-23 12:36:43.946730 | orchestrator | 12:36:43.944 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
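Editor's note: the "Finding ... versions matching" and "Installing ..." lines above come from the required_providers declaration in the testbed's OpenTofu configuration. The following is only a minimal sketch of such a block, reconstructed from this log; the version constraints are assumptions (the truncated '= 1.53.0"...' line is assumed to belong to the openstack provider) and the actual declaration lives in the osism/testbed repository.

# Hypothetical reconstruction from the init output above, not the actual testbed sources.
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"   # assumed from the truncated '= 1.53.0"...' line
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"    # from 'Finding hashicorp/local versions matching ">= 2.2.0"'
    }
    null = {
      source = "hashicorp/null"   # installed as v3.2.3; no constraint visible in this log
    }
  }
}

The versions actually selected in this run (null v3.2.3, openstack v3.0.0, local v2.5.2) are what the generated .terraform.lock.hcl records, which is why the init output asks for that file to be committed.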
2025-03-23 12:36:44.038199 | orchestrator | 12:36:44.038 STDOUT terraform: ci.auto.tfvars 2025-03-23 12:36:44.205599 | orchestrator | 12:36:44.205 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead. 2025-03-23 12:36:44.957037 | orchestrator | 12:36:44.956 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-03-23 12:36:45.482198 | orchestrator | 12:36:45.481 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-03-23 12:36:45.734092 | orchestrator | 12:36:45.733 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-03-23 12:36:45.734158 | orchestrator | 12:36:45.734 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-03-23 12:36:45.734203 | orchestrator | 12:36:45.734 STDOUT terraform:  + create 2025-03-23 12:36:45.734216 | orchestrator | 12:36:45.734 STDOUT terraform:  <= read (data resources) 2025-03-23 12:36:45.734222 | orchestrator | 12:36:45.734 STDOUT terraform: OpenTofu will perform the following actions: 2025-03-23 12:36:45.734230 | orchestrator | 12:36:45.734 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-03-23 12:36:45.734237 | orchestrator | 12:36:45.734 STDOUT terraform:  # (config refers to values not yet known) 2025-03-23 12:36:45.734271 | orchestrator | 12:36:45.734 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-03-23 12:36:45.734303 | orchestrator | 12:36:45.734 STDOUT terraform:  + checksum = (known after apply) 2025-03-23 12:36:45.734335 | orchestrator | 12:36:45.734 STDOUT terraform:  + created_at = (known after apply) 2025-03-23 12:36:45.734366 | orchestrator | 12:36:45.734 STDOUT terraform:  + file = (known after apply) 2025-03-23 12:36:45.734398 | orchestrator | 12:36:45.734 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.734429 | orchestrator | 12:36:45.734 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.734460 | orchestrator | 12:36:45.734 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-03-23 12:36:45.734493 | orchestrator | 12:36:45.734 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-03-23 12:36:45.734519 | orchestrator | 12:36:45.734 STDOUT terraform:  + most_recent = true 2025-03-23 12:36:45.734558 | orchestrator | 12:36:45.734 STDOUT terraform:  + name = (known after apply) 2025-03-23 12:36:45.734588 | orchestrator | 12:36:45.734 STDOUT terraform:  + protected = (known after apply) 2025-03-23 12:36:45.734618 | orchestrator | 12:36:45.734 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.734650 | orchestrator | 12:36:45.734 STDOUT terraform:  + schema = (known after apply) 2025-03-23 12:36:45.734682 | orchestrator | 12:36:45.734 STDOUT terraform:  + size_bytes = (known after apply) 2025-03-23 12:36:45.734716 | orchestrator | 12:36:45.734 STDOUT terraform:  + tags = (known after apply) 2025-03-23 12:36:45.734749 | orchestrator | 12:36:45.734 STDOUT terraform:  + updated_at = (known after apply) 2025-03-23 12:36:45.734758 | orchestrator | 12:36:45.734 STDOUT terraform:  } 2025-03-23 12:36:45.734818 | orchestrator | 12:36:45.734 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-03-23 12:36:45.734846 | orchestrator | 12:36:45.734 STDOUT terraform:  # (config refers to values 
not yet known) 2025-03-23 12:36:45.734884 | orchestrator | 12:36:45.734 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-03-23 12:36:45.734918 | orchestrator | 12:36:45.734 STDOUT terraform:  + checksum = (known after apply) 2025-03-23 12:36:45.734943 | orchestrator | 12:36:45.734 STDOUT terraform:  + created_at = (known after apply) 2025-03-23 12:36:45.734973 | orchestrator | 12:36:45.734 STDOUT terraform:  + file = (known after apply) 2025-03-23 12:36:45.735009 | orchestrator | 12:36:45.734 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.735039 | orchestrator | 12:36:45.735 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.735067 | orchestrator | 12:36:45.735 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-03-23 12:36:45.735096 | orchestrator | 12:36:45.735 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-03-23 12:36:45.735120 | orchestrator | 12:36:45.735 STDOUT terraform:  + most_recent = true 2025-03-23 12:36:45.735149 | orchestrator | 12:36:45.735 STDOUT terraform:  + name = (known after apply) 2025-03-23 12:36:45.735180 | orchestrator | 12:36:45.735 STDOUT terraform:  + protected = (known after apply) 2025-03-23 12:36:45.735209 | orchestrator | 12:36:45.735 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.735238 | orchestrator | 12:36:45.735 STDOUT terraform:  + schema = (known after apply) 2025-03-23 12:36:45.735270 | orchestrator | 12:36:45.735 STDOUT terraform:  + size_bytes = (known after apply) 2025-03-23 12:36:45.735300 | orchestrator | 12:36:45.735 STDOUT terraform:  + tags = (known after apply) 2025-03-23 12:36:45.735329 | orchestrator | 12:36:45.735 STDOUT terraform:  + updated_at = (known after apply) 2025-03-23 12:36:45.735336 | orchestrator | 12:36:45.735 STDOUT terraform:  } 2025-03-23 12:36:45.735372 | orchestrator | 12:36:45.735 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-03-23 12:36:45.735401 | orchestrator | 12:36:45.735 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-03-23 12:36:45.735438 | orchestrator | 12:36:45.735 STDOUT terraform:  + content = (known after apply) 2025-03-23 12:36:45.735476 | orchestrator | 12:36:45.735 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-03-23 12:36:45.735510 | orchestrator | 12:36:45.735 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-03-23 12:36:45.735557 | orchestrator | 12:36:45.735 STDOUT terraform:  + content_md5 = (known after apply) 2025-03-23 12:36:45.735596 | orchestrator | 12:36:45.735 STDOUT terraform:  + content_sha1 = (known after apply) 2025-03-23 12:36:45.735631 | orchestrator | 12:36:45.735 STDOUT terraform:  + content_sha256 = (known after apply) 2025-03-23 12:36:45.735669 | orchestrator | 12:36:45.735 STDOUT terraform:  + content_sha512 = (known after apply) 2025-03-23 12:36:45.735693 | orchestrator | 12:36:45.735 STDOUT terraform:  + directory_permission = "0777" 2025-03-23 12:36:45.735718 | orchestrator | 12:36:45.735 STDOUT terraform:  + file_permission = "0644" 2025-03-23 12:36:45.735757 | orchestrator | 12:36:45.735 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-03-23 12:36:45.735795 | orchestrator | 12:36:45.735 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.735803 | orchestrator | 12:36:45.735 STDOUT terraform:  } 2025-03-23 12:36:45.735832 | orchestrator | 12:36:45.735 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-03-23 12:36:45.735859 | orchestrator | 12:36:45.735 
STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-03-23 12:36:45.735895 | orchestrator | 12:36:45.735 STDOUT terraform:  + content = (known after apply) 2025-03-23 12:36:45.735932 | orchestrator | 12:36:45.735 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-03-23 12:36:45.735969 | orchestrator | 12:36:45.735 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-03-23 12:36:45.736012 | orchestrator | 12:36:45.735 STDOUT terraform:  + content_md5 = (known after apply) 2025-03-23 12:36:45.736046 | orchestrator | 12:36:45.736 STDOUT terraform:  + content_sha1 = (known after apply) 2025-03-23 12:36:45.736083 | orchestrator | 12:36:45.736 STDOUT terraform:  + content_sha256 = (known after apply) 2025-03-23 12:36:45.736119 | orchestrator | 12:36:45.736 STDOUT terraform:  + content_sha512 = (known after apply) 2025-03-23 12:36:45.736143 | orchestrator | 12:36:45.736 STDOUT terraform:  + directory_permission = "0777" 2025-03-23 12:36:45.736168 | orchestrator | 12:36:45.736 STDOUT terraform:  + file_permission = "0644" 2025-03-23 12:36:45.736202 | orchestrator | 12:36:45.736 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-03-23 12:36:45.736238 | orchestrator | 12:36:45.736 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.736246 | orchestrator | 12:36:45.736 STDOUT terraform:  } 2025-03-23 12:36:45.736294 | orchestrator | 12:36:45.736 STDOUT terraform:  # local_file.inventory will be created 2025-03-23 12:36:45.736320 | orchestrator | 12:36:45.736 STDOUT terraform:  + resource "local_file" "inventory" { 2025-03-23 12:36:45.736357 | orchestrator | 12:36:45.736 STDOUT terraform:  + content = (known after apply) 2025-03-23 12:36:45.736394 | orchestrator | 12:36:45.736 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-03-23 12:36:45.736433 | orchestrator | 12:36:45.736 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-03-23 12:36:45.736470 | orchestrator | 12:36:45.736 STDOUT terraform:  + content_md5 = (known after apply) 2025-03-23 12:36:45.736505 | orchestrator | 12:36:45.736 STDOUT terraform:  + content_sha1 = (known after apply) 2025-03-23 12:36:45.736573 | orchestrator | 12:36:45.736 STDOUT terraform:  + content_sha256 = (known after apply) 2025-03-23 12:36:45.736604 | orchestrator | 12:36:45.736 STDOUT terraform:  + content_sha512 = (known after apply) 2025-03-23 12:36:45.736631 | orchestrator | 12:36:45.736 STDOUT terraform:  + directory_permission = "0777" 2025-03-23 12:36:45.736658 | orchestrator | 12:36:45.736 STDOUT terraform:  + file_permission = "0644" 2025-03-23 12:36:45.736692 | orchestrator | 12:36:45.736 STDOUT terraform:  + filename = "inventory.ci" 2025-03-23 12:36:45.736729 | orchestrator | 12:36:45.736 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.736736 | orchestrator | 12:36:45.736 STDOUT terraform:  } 2025-03-23 12:36:45.736769 | orchestrator | 12:36:45.736 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-03-23 12:36:45.736799 | orchestrator | 12:36:45.736 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-03-23 12:36:45.736832 | orchestrator | 12:36:45.736 STDOUT terraform:  + content = (sensitive value) 2025-03-23 12:36:45.736869 | orchestrator | 12:36:45.736 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-03-23 12:36:45.736906 | orchestrator | 12:36:45.736 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-03-23 12:36:45.736944 | orchestrator | 12:36:45.736 STDOUT 
terraform:  + content_md5 = (known after apply) 2025-03-23 12:36:45.736984 | orchestrator | 12:36:45.736 STDOUT terraform:  + content_sha1 = (known after apply) 2025-03-23 12:36:45.737017 | orchestrator | 12:36:45.736 STDOUT terraform:  + content_sha256 = (known after apply) 2025-03-23 12:36:45.737053 | orchestrator | 12:36:45.737 STDOUT terraform:  + content_sha512 = (known after apply) 2025-03-23 12:36:45.737079 | orchestrator | 12:36:45.737 STDOUT terraform:  + directory_permission = "0700" 2025-03-23 12:36:45.737104 | orchestrator | 12:36:45.737 STDOUT terraform:  + file_permission = "0600" 2025-03-23 12:36:45.737136 | orchestrator | 12:36:45.737 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-03-23 12:36:45.737173 | orchestrator | 12:36:45.737 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.737180 | orchestrator | 12:36:45.737 STDOUT terraform:  } 2025-03-23 12:36:45.737215 | orchestrator | 12:36:45.737 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-03-23 12:36:45.737247 | orchestrator | 12:36:45.737 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-03-23 12:36:45.737268 | orchestrator | 12:36:45.737 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.737276 | orchestrator | 12:36:45.737 STDOUT terraform:  } 2025-03-23 12:36:45.737328 | orchestrator | 12:36:45.737 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-03-23 12:36:45.737376 | orchestrator | 12:36:45.737 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-03-23 12:36:45.737407 | orchestrator | 12:36:45.737 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.737430 | orchestrator | 12:36:45.737 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.737464 | orchestrator | 12:36:45.737 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.737496 | orchestrator | 12:36:45.737 STDOUT terraform:  + image_id = (known after apply) 2025-03-23 12:36:45.737539 | orchestrator | 12:36:45.737 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.737578 | orchestrator | 12:36:45.737 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-03-23 12:36:45.737610 | orchestrator | 12:36:45.737 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.737631 | orchestrator | 12:36:45.737 STDOUT terraform:  + size = 80 2025-03-23 12:36:45.737652 | orchestrator | 12:36:45.737 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.737659 | orchestrator | 12:36:45.737 STDOUT terraform:  } 2025-03-23 12:36:45.737708 | orchestrator | 12:36:45.737 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-03-23 12:36:45.737756 | orchestrator | 12:36:45.737 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-03-23 12:36:45.737788 | orchestrator | 12:36:45.737 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.737811 | orchestrator | 12:36:45.737 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.737842 | orchestrator | 12:36:45.737 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.737875 | orchestrator | 12:36:45.737 STDOUT terraform:  + image_id = (known after apply) 2025-03-23 12:36:45.737907 | orchestrator | 12:36:45.737 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.737948 | orchestrator | 12:36:45.737 STDOUT terraform:  + name = 
"testbed-volume-0-node-base" 2025-03-23 12:36:45.737980 | orchestrator | 12:36:45.737 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.738002 | orchestrator | 12:36:45.737 STDOUT terraform:  + size = 80 2025-03-23 12:36:45.738040 | orchestrator | 12:36:45.737 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.738069 | orchestrator | 12:36:45.738 STDOUT terraform:  } 2025-03-23 12:36:45.738096 | orchestrator | 12:36:45.738 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-03-23 12:36:45.738143 | orchestrator | 12:36:45.738 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-03-23 12:36:45.738175 | orchestrator | 12:36:45.738 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.738195 | orchestrator | 12:36:45.738 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.738228 | orchestrator | 12:36:45.738 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.738259 | orchestrator | 12:36:45.738 STDOUT terraform:  + image_id = (known after apply) 2025-03-23 12:36:45.738291 | orchestrator | 12:36:45.738 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.738331 | orchestrator | 12:36:45.738 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-03-23 12:36:45.738365 | orchestrator | 12:36:45.738 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.738387 | orchestrator | 12:36:45.738 STDOUT terraform:  + size = 80 2025-03-23 12:36:45.738410 | orchestrator | 12:36:45.738 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.738418 | orchestrator | 12:36:45.738 STDOUT terraform:  } 2025-03-23 12:36:45.738467 | orchestrator | 12:36:45.738 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-03-23 12:36:45.738513 | orchestrator | 12:36:45.738 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-03-23 12:36:45.738553 | orchestrator | 12:36:45.738 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.738574 | orchestrator | 12:36:45.738 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.738606 | orchestrator | 12:36:45.738 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.738638 | orchestrator | 12:36:45.738 STDOUT terraform:  + image_id = (known after apply) 2025-03-23 12:36:45.738671 | orchestrator | 12:36:45.738 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.738714 | orchestrator | 12:36:45.738 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-03-23 12:36:45.738746 | orchestrator | 12:36:45.738 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.738767 | orchestrator | 12:36:45.738 STDOUT terraform:  + size = 80 2025-03-23 12:36:45.738789 | orchestrator | 12:36:45.738 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.738796 | orchestrator | 12:36:45.738 STDOUT terraform:  } 2025-03-23 12:36:45.738846 | orchestrator | 12:36:45.738 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-03-23 12:36:45.738895 | orchestrator | 12:36:45.738 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-03-23 12:36:45.738924 | orchestrator | 12:36:45.738 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.738944 | orchestrator | 12:36:45.738 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 
12:36:45.738979 | orchestrator | 12:36:45.738 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.739011 | orchestrator | 12:36:45.738 STDOUT terraform:  + image_id = (known after apply) 2025-03-23 12:36:45.739043 | orchestrator | 12:36:45.739 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.739084 | orchestrator | 12:36:45.739 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-03-23 12:36:45.739116 | orchestrator | 12:36:45.739 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.739137 | orchestrator | 12:36:45.739 STDOUT terraform:  + size = 80 2025-03-23 12:36:45.739159 | orchestrator | 12:36:45.739 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.739166 | orchestrator | 12:36:45.739 STDOUT terraform:  } 2025-03-23 12:36:45.739216 | orchestrator | 12:36:45.739 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-03-23 12:36:45.739264 | orchestrator | 12:36:45.739 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-03-23 12:36:45.739296 | orchestrator | 12:36:45.739 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.739317 | orchestrator | 12:36:45.739 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.739349 | orchestrator | 12:36:45.739 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.739381 | orchestrator | 12:36:45.739 STDOUT terraform:  + image_id = (known after apply) 2025-03-23 12:36:45.739413 | orchestrator | 12:36:45.739 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.739454 | orchestrator | 12:36:45.739 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-03-23 12:36:45.739487 | orchestrator | 12:36:45.739 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.739511 | orchestrator | 12:36:45.739 STDOUT terraform:  + size = 80 2025-03-23 12:36:45.739557 | orchestrator | 12:36:45.739 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.739607 | orchestrator | 12:36:45.739 STDOUT terraform:  } 2025-03-23 12:36:45.739616 | orchestrator | 12:36:45.739 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-03-23 12:36:45.739655 | orchestrator | 12:36:45.739 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-03-23 12:36:45.739686 | orchestrator | 12:36:45.739 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.739707 | orchestrator | 12:36:45.739 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.739739 | orchestrator | 12:36:45.739 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.739772 | orchestrator | 12:36:45.739 STDOUT terraform:  + image_id = (known after apply) 2025-03-23 12:36:45.739810 | orchestrator | 12:36:45.739 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.740114 | orchestrator | 12:36:45.739 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-03-23 12:36:45.740128 | orchestrator | 12:36:45.739 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.740133 | orchestrator | 12:36:45.739 STDOUT terraform:  + size = 80 2025-03-23 12:36:45.740139 | orchestrator | 12:36:45.739 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.740144 | orchestrator | 12:36:45.739 STDOUT terraform:  } 2025-03-23 12:36:45.740149 | orchestrator | 12:36:45.739 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be 
created 2025-03-23 12:36:45.740154 | orchestrator | 12:36:45.739 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-23 12:36:45.740159 | orchestrator | 12:36:45.739 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.740164 | orchestrator | 12:36:45.740 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.740172 | orchestrator | 12:36:45.740 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.740177 | orchestrator | 12:36:45.740 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.740184 | orchestrator | 12:36:45.740 STDOUT terraform:  + name = "testbed-volume-0-node-0" 2025-03-23 12:36:45.740189 | orchestrator | 12:36:45.740 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.740195 | orchestrator | 12:36:45.740 STDOUT terraform:  + size = 20 2025-03-23 12:36:45.740212 | orchestrator | 12:36:45.740 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.741613 | orchestrator | 12:36:45.740 STDOUT terraform:  } 2025-03-23 12:36:45.741654 | orchestrator | 12:36:45.740 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-03-23 12:36:45.741660 | orchestrator | 12:36:45.740 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-23 12:36:45.741666 | orchestrator | 12:36:45.740 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.741671 | orchestrator | 12:36:45.740 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.741676 | orchestrator | 12:36:45.740 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.741681 | orchestrator | 12:36:45.740 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.741686 | orchestrator | 12:36:45.740 STDOUT terraform:  + name = "testbed-volume-1-node-1" 2025-03-23 12:36:45.741691 | orchestrator | 12:36:45.740 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.741696 | orchestrator | 12:36:45.740 STDOUT terraform:  + size = 20 2025-03-23 12:36:45.741701 | orchestrator | 12:36:45.740 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.741707 | orchestrator | 12:36:45.740 STDOUT terraform:  } 2025-03-23 12:36:45.741712 | orchestrator | 12:36:45.740 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-03-23 12:36:45.741717 | orchestrator | 12:36:45.740 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-23 12:36:45.741721 | orchestrator | 12:36:45.740 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.741739 | orchestrator | 12:36:45.740 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.741745 | orchestrator | 12:36:45.740 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.741751 | orchestrator | 12:36:45.740 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.741756 | orchestrator | 12:36:45.740 STDOUT terraform:  + name = "testbed-volume-2-node-2" 2025-03-23 12:36:45.741761 | orchestrator | 12:36:45.740 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.741766 | orchestrator | 12:36:45.740 STDOUT terraform:  + size = 20 2025-03-23 12:36:45.741771 | orchestrator | 12:36:45.740 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.741777 | orchestrator | 12:36:45.740 STDOUT terraform:  } 2025-03-23 12:36:45.741781 | orchestrator | 12:36:45.740 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-03-23 12:36:45.741787 | orchestrator | 12:36:45.740 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-23 12:36:45.741792 | orchestrator | 12:36:45.740 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.741797 | orchestrator | 12:36:45.740 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.741802 | orchestrator | 12:36:45.740 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.741807 | orchestrator | 12:36:45.740 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.741822 | orchestrator | 12:36:45.740 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-03-23 12:36:45.741828 | orchestrator | 12:36:45.741 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.741833 | orchestrator | 12:36:45.741 STDOUT terraform:  + size = 20 2025-03-23 12:36:45.741837 | orchestrator | 12:36:45.741 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.741843 | orchestrator | 12:36:45.741 STDOUT terraform:  } 2025-03-23 12:36:45.741852 | orchestrator | 12:36:45.741 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-03-23 12:36:45.741858 | orchestrator | 12:36:45.741 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-23 12:36:45.741863 | orchestrator | 12:36:45.741 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.741868 | orchestrator | 12:36:45.741 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.741878 | orchestrator | 12:36:45.741 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.741883 | orchestrator | 12:36:45.741 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.741888 | orchestrator | 12:36:45.741 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-03-23 12:36:45.741893 | orchestrator | 12:36:45.741 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.741898 | orchestrator | 12:36:45.741 STDOUT terraform:  + size = 20 2025-03-23 12:36:45.741903 | orchestrator | 12:36:45.741 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.741909 | orchestrator | 12:36:45.741 STDOUT terraform:  } 2025-03-23 12:36:45.741914 | orchestrator | 12:36:45.741 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-03-23 12:36:45.741919 | orchestrator | 12:36:45.741 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-23 12:36:45.741924 | orchestrator | 12:36:45.741 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.741929 | orchestrator | 12:36:45.741 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.741934 | orchestrator | 12:36:45.741 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.741940 | orchestrator | 12:36:45.741 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.741946 | orchestrator | 12:36:45.741 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-03-23 12:36:45.741951 | orchestrator | 12:36:45.741 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.741956 | orchestrator | 12:36:45.741 STDOUT terraform:  + size = 20 2025-03-23 12:36:45.741961 | orchestrator | 12:36:45.741 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.741966 | orchestrator | 12:36:45.741 STDOUT terraform:  } 2025-03-23 12:36:45.741971 | orchestrator | 12:36:45.741 
STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-03-23 12:36:45.741975 | orchestrator | 12:36:45.741 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-23 12:36:45.741980 | orchestrator | 12:36:45.741 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.741989 | orchestrator | 12:36:45.741 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.741994 | orchestrator | 12:36:45.741 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.741999 | orchestrator | 12:36:45.741 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.742006 | orchestrator | 12:36:45.741 STDOUT terraform:  + name = "testbed-volume-6-node-0" 2025-03-23 12:36:45.742141 | orchestrator | 12:36:45.741 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.742151 | orchestrator | 12:36:45.741 STDOUT terraform:  + size = 20 2025-03-23 12:36:45.742156 | orchestrator | 12:36:45.741 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.742161 | orchestrator | 12:36:45.741 STDOUT terraform:  } 2025-03-23 12:36:45.742170 | orchestrator | 12:36:45.741 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-03-23 12:36:45.742179 | orchestrator | 12:36:45.741 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-23 12:36:45.742185 | orchestrator | 12:36:45.742 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.742192 | orchestrator | 12:36:45.742 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.742721 | orchestrator | 12:36:45.742 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.742735 | orchestrator | 12:36:45.742 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.742740 | orchestrator | 12:36:45.742 STDOUT terraform:  + name = "testbed-volume-7-node-1" 2025-03-23 12:36:45.742746 | orchestrator | 12:36:45.742 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.742750 | orchestrator | 12:36:45.742 STDOUT terraform:  + size = 20 2025-03-23 12:36:45.742756 | orchestrator | 12:36:45.742 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.742761 | orchestrator | 12:36:45.742 STDOUT terraform:  } 2025-03-23 12:36:45.742766 | orchestrator | 12:36:45.742 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-03-23 12:36:45.742771 | orchestrator | 12:36:45.742 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-23 12:36:45.742776 | orchestrator | 12:36:45.742 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.742785 | orchestrator | 12:36:45.742 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.742790 | orchestrator | 12:36:45.742 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.742795 | orchestrator | 12:36:45.742 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.742800 | orchestrator | 12:36:45.742 STDOUT terraform:  + name = "testbed-volume-8-node-2" 2025-03-23 12:36:45.742805 | orchestrator | 12:36:45.742 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.742813 | orchestrator | 12:36:45.742 STDOUT terraform:  + size = 20 2025-03-23 12:36:45.742877 | orchestrator | 12:36:45.742 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.742883 | orchestrator | 12:36:45.742 STDOUT terraform:  } 2025-03-23 12:36:45.742898 | orchestrator 
| 12:36:45.742 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created 2025-03-23 12:36:45.742906 | orchestrator | 12:36:45.742 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-23 12:36:45.742922 | orchestrator | 12:36:45.742 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.742943 | orchestrator | 12:36:45.742 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.742981 | orchestrator | 12:36:45.742 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.743025 | orchestrator | 12:36:45.742 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.743081 | orchestrator | 12:36:45.743 STDOUT terraform:  + name = "testbed-volume-9-node-3" 2025-03-23 12:36:45.743112 | orchestrator | 12:36:45.743 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.743133 | orchestrator | 12:36:45.743 STDOUT terraform:  + size = 20 2025-03-23 12:36:45.743170 | orchestrator | 12:36:45.743 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.743178 | orchestrator | 12:36:45.743 STDOUT terraform:  } 2025-03-23 12:36:45.743242 | orchestrator | 12:36:45.743 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created 2025-03-23 12:36:45.743285 | orchestrator | 12:36:45.743 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-23 12:36:45.743333 | orchestrator | 12:36:45.743 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.743357 | orchestrator | 12:36:45.743 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.743405 | orchestrator | 12:36:45.743 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.743438 | orchestrator | 12:36:45.743 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.743492 | orchestrator | 12:36:45.743 STDOUT terraform:  + name = "testbed-volume-10-node-4" 2025-03-23 12:36:45.743538 | orchestrator | 12:36:45.743 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.743580 | orchestrator | 12:36:45.743 STDOUT terraform:  + size = 20 2025-03-23 12:36:45.743601 | orchestrator | 12:36:45.743 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.743609 | orchestrator | 12:36:45.743 STDOUT terraform:  } 2025-03-23 12:36:45.743863 | orchestrator | 12:36:45.743 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created 2025-03-23 12:36:45.743894 | orchestrator | 12:36:45.743 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-23 12:36:45.743899 | orchestrator | 12:36:45.743 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.743904 | orchestrator | 12:36:45.743 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.743910 | orchestrator | 12:36:45.743 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.743915 | orchestrator | 12:36:45.743 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.743922 | orchestrator | 12:36:45.743 STDOUT terraform:  + name = "testbed-volume-11-node-5" 2025-03-23 12:36:45.743958 | orchestrator | 12:36:45.743 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.743965 | orchestrator | 12:36:45.743 STDOUT terraform:  + size = 20 2025-03-23 12:36:45.743982 | orchestrator | 12:36:45.743 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.743990 | orchestrator | 12:36:45.743 STDOUT terraform:  } 2025-03-23 
12:36:45.744069 | orchestrator | 12:36:45.743 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[12] will be created 2025-03-23 12:36:45.744111 | orchestrator | 12:36:45.744 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-23 12:36:45.744142 | orchestrator | 12:36:45.744 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.744163 | orchestrator | 12:36:45.744 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.744209 | orchestrator | 12:36:45.744 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.744333 | orchestrator | 12:36:45.744 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.744373 | orchestrator | 12:36:45.744 STDOUT terraform:  + name = "testbed-volume-12-node-0" 2025-03-23 12:36:45.744405 | orchestrator | 12:36:45.744 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.744427 | orchestrator | 12:36:45.744 STDOUT terraform:  + size = 20 2025-03-23 12:36:45.744449 | orchestrator | 12:36:45.744 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.744456 | orchestrator | 12:36:45.744 STDOUT terraform:  } 2025-03-23 12:36:45.744503 | orchestrator | 12:36:45.744 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-03-23 12:36:45.744555 | orchestrator | 12:36:45.744 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-23 12:36:45.744586 | orchestrator | 12:36:45.744 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.744607 | orchestrator | 12:36:45.744 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.744638 | orchestrator | 12:36:45.744 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.744670 | orchestrator | 12:36:45.744 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.744709 | orchestrator | 12:36:45.744 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-03-23 12:36:45.744740 | orchestrator | 12:36:45.744 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.744762 | orchestrator | 12:36:45.744 STDOUT terraform:  + size = 20 2025-03-23 12:36:45.744784 | orchestrator | 12:36:45.744 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.744791 | orchestrator | 12:36:45.744 STDOUT terraform:  } 2025-03-23 12:36:45.744838 | orchestrator | 12:36:45.744 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-03-23 12:36:45.744880 | orchestrator | 12:36:45.744 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-23 12:36:45.744910 | orchestrator | 12:36:45.744 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.744931 | orchestrator | 12:36:45.744 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.744962 | orchestrator | 12:36:45.744 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.744993 | orchestrator | 12:36:45.744 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.745030 | orchestrator | 12:36:45.744 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-03-23 12:36:45.745061 | orchestrator | 12:36:45.745 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.745082 | orchestrator | 12:36:45.745 STDOUT terraform:  + size = 20 2025-03-23 12:36:45.745103 | orchestrator | 12:36:45.745 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.745110 | orchestrator | 12:36:45.745 STDOUT 
terraform:  } 2025-03-23 12:36:45.745157 | orchestrator | 12:36:45.745 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-03-23 12:36:45.745200 | orchestrator | 12:36:45.745 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-23 12:36:45.745233 | orchestrator | 12:36:45.745 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.745256 | orchestrator | 12:36:45.745 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.745284 | orchestrator | 12:36:45.745 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.745316 | orchestrator | 12:36:45.745 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.745350 | orchestrator | 12:36:45.745 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-03-23 12:36:45.745381 | orchestrator | 12:36:45.745 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.745402 | orchestrator | 12:36:45.745 STDOUT terraform:  + size = 20 2025-03-23 12:36:45.745423 | orchestrator | 12:36:45.745 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.745430 | orchestrator | 12:36:45.745 STDOUT terraform:  } 2025-03-23 12:36:45.745477 | orchestrator | 12:36:45.745 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-03-23 12:36:45.745520 | orchestrator | 12:36:45.745 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-23 12:36:45.745560 | orchestrator | 12:36:45.745 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.745579 | orchestrator | 12:36:45.745 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.745611 | orchestrator | 12:36:45.745 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.745641 | orchestrator | 12:36:45.745 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.745678 | orchestrator | 12:36:45.745 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-03-23 12:36:45.745710 | orchestrator | 12:36:45.745 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.745730 | orchestrator | 12:36:45.745 STDOUT terraform:  + size = 20 2025-03-23 12:36:45.745752 | orchestrator | 12:36:45.745 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.745759 | orchestrator | 12:36:45.745 STDOUT terraform:  } 2025-03-23 12:36:45.745807 | orchestrator | 12:36:45.745 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-03-23 12:36:45.745848 | orchestrator | 12:36:45.745 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-23 12:36:45.745878 | orchestrator | 12:36:45.745 STDOUT terraform:  + attachment = (known after apply) 2025-03-23 12:36:45.745898 | orchestrator | 12:36:45.745 STDOUT terraform:  + availability_zone = "nova" 2025-03-23 12:36:45.745930 | orchestrator | 12:36:45.745 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.745961 | orchestrator | 12:36:45.745 STDOUT terraform:  + metadata = (known after apply) 2025-03-23 12:36:45.745999 | orchestrator | 12:36:45.745 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-03-23 12:36:45.746050 | orchestrator | 12:36:45.745 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.746059 | orchestrator | 12:36:45.746 STDOUT terraform:  + size = 20 2025-03-23 12:36:45.746081 | orchestrator | 12:36:45.746 STDOUT terraform:  + volume_type = "ssd" 2025-03-23 12:36:45.746088 | orchestrator 
  # openstack_compute_instance_v2.manager_server will be created
  + resource "openstack_compute_instance_v2" "manager_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-4V-16"
      + force_delete = false
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-manager"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = (known after apply)
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }
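The manager instance boots from a pre-created volume (source_type and destination_type "volume", boot_index 0) and gets its NIC from a pre-created Neutron port, which is why image and network details are only known after apply. A minimal sketch that would plan this way follows; the boot-volume resource name and the user-data source are assumptions, not the actual osism/testbed code (in the plan above user_data is computed, so it is presumably rendered from other resources rather than read from a static file).

# Sketch only: boot-from-volume manager instance on the management port.
# "manager_volume" and the user-data file path are assumed names.
resource "openstack_compute_instance_v2" "manager_server" {
  name              = "testbed-manager"
  flavor_name       = "OSISM-4V-16"
  key_pair          = openstack_compute_keypair_v2.key.name
  availability_zone = "nova"
  config_drive      = true
  power_state       = "active"
  user_data         = file("${path.module}/manager-user-data.yml") # assumed; the real value is computed

  block_device {
    uuid                  = openstack_blockstorage_volume_v3.manager_volume.id # assumed resource name
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  network {
    port = openstack_networking_port_v2.manager_port_management.id
  }
}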
  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-0"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-1"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-2"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-3"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-4"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-5"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }
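The six node instances above are identical apart from their names (testbed-node-0 through testbed-node-5): flavor OSISM-8V-32, boot-from-volume, config drive enabled, and the same user data (Terraform prints user_data as a hash of its content). A count-based sketch, with the boot-volume and user-data names again being assumptions rather than the actual osism/testbed code:

# Sketch only: six identical compute nodes, boot-from-volume, one management
# port each. var.node_count is declared in the volume sketch further above;
# "node_volume_boot" and the user-data path are assumed names.
resource "openstack_compute_instance_v2" "node_server" {
  count             = var.node_count
  name              = "testbed-node-${count.index}"
  flavor_name       = "OSISM-8V-32"
  key_pair          = openstack_compute_keypair_v2.key.name
  availability_zone = "nova"
  config_drive      = true
  power_state       = "active"
  user_data         = file("${path.module}/node-user-data.yml") # assumed path

  block_device {
    uuid                  = openstack_blockstorage_volume_v3.node_volume_boot[count.index].id # assumed resource name
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}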
  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id = (known after apply)
      + name = "testbed"
      + private_key = (sensitive value)
      + public_key = (known after apply)
      + region = (known after apply)
      + user_id = (known after apply)
    }
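The keypair is generated by Nova itself: no public key is supplied, so the provider returns the generated private key as a sensitive attribute, which is the "(sensitive value)" line in the plan. A minimal sketch:

# Sketch only: with public_key omitted, Nova generates the pair and the
# provider exports private_key as a sensitive attribute.
resource "openstack_compute_keypair_v2" "key" {
  name = "testbed"
}

output "testbed_private_key" {
  value     = openstack_compute_keypair_v2.key.private_key
  sensitive = true
}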
  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[9] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[10] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[11] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[12] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[13] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[14] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[15] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[16] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[17] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }
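Eighteen attachments pair the eighteen data volumes with the six nodes; every attribute is only known after apply because both sides are created in the same run. A sketch that mirrors the volume numbering used further above (the index-to-node mapping is the same assumption as in the volume sketch):

# Sketch only: attach data volume i to node i % node_count.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = var.node_count * var.volumes_per_node
  instance_id = openstack_compute_instance_v2.node_server[count.index % var.node_count].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}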
  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip = (known after apply)
      + floating_ip = (known after apply)
      + id = (known after apply)
      + port_id = (known after apply)
      + region = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address = (known after apply)
      + all_tags = (known after apply)
      + dns_domain = (known after apply)
      + dns_name = (known after apply)
      + fixed_ip = (known after apply)
      + id = (known after apply)
      + pool = "public"
      + port_id = (known after apply)
      + region = (known after apply)
      + subnet_id = (known after apply)
      + tenant_id = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up = (known after apply)
      + all_tags = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain = (known after apply)
      + external = (known after apply)
      + id = (known after apply)
      + mtu = (known after apply)
      + name = "net-testbed-management"
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + shared = (known after apply)
      + tenant_id = (known after apply)
      + transparent_vlan = (known after apply)
      + segments (known after apply)
    }
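The manager is reachable from outside through a floating IP allocated from the "public" pool and associated with its management port, while the internal net-testbed-management network is pinned to the availability-zone hint nova. A sketch of these three resources:

# Sketch only: floating IP for the manager plus the internal management network.
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}

resource "openstack_networking_network_v2" "net_management" {
  name                    = "net-testbed-management"
  availability_zone_hints = ["nova"]
}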
  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id = (known after apply)
        }
    }
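The manager's management port gets the fixed address 192.168.16.5 and two allowed-address-pairs (192.168.112.0/20 and 192.168.16.8/20), so traffic for those additional ranges, for example virtual or service addresses, is not dropped by port security. A sketch, with the subnet resource name being an assumption:

# Sketch only: the manager's management port. "subnet_management" is an
# assumed name for the subnet resource, which is not visible in this plan.
resource "openstack_networking_port_v2" "manager_port_management" {
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id # assumed name
    ip_address = "192.168.16.5"
  }

  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }

  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
}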
  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id = (known after apply)
        }
    }
openstack_networking_port_v2.node_port_management[3] will be created 2025-03-23 12:36:45.765642 | orchestrator | 12:36:45.765 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-23 12:36:45.765647 | orchestrator | 12:36:45.765 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-23 12:36:45.765652 | orchestrator | 12:36:45.765 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-23 12:36:45.765662 | orchestrator | 12:36:45.765 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-23 12:36:45.765699 | orchestrator | 12:36:45.765 STDOUT terraform:  + all_tags = (known after apply) 2025-03-23 12:36:45.765706 | orchestrator | 12:36:45.765 STDOUT terraform:  + device_id = (known after apply) 2025-03-23 12:36:45.765711 | orchestrator | 12:36:45.765 STDOUT terraform:  + device_owner = (known after apply) 2025-03-23 12:36:45.765716 | orchestrator | 12:36:45.765 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-23 12:36:45.765720 | orchestrator | 12:36:45.765 STDOUT terraform:  + dns_name = (known after apply) 2025-03-23 12:36:45.765725 | orchestrator | 12:36:45.765 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.765730 | orchestrator | 12:36:45.765 STDOUT terraform:  + mac_address = (known after apply) 2025-03-23 12:36:45.765737 | orchestrator | 12:36:45.765 STDOUT terraform:  + network_id = (known after apply) 2025-03-23 12:36:45.765742 | orchestrator | 12:36:45.765 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-23 12:36:45.765749 | orchestrator | 12:36:45.765 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-23 12:36:45.765789 | orchestrator | 12:36:45.765 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.765823 | orchestrator | 12:36:45.765 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-23 12:36:45.765858 | orchestrator | 12:36:45.765 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-23 12:36:45.765865 | orchestrator | 12:36:45.765 STDOUT terraform:  + allowed_address_pairs { 2025-03-23 12:36:45.765903 | orchestrator | 12:36:45.765 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-23 12:36:45.765932 | orchestrator | 12:36:45.765 STDOUT terraform:  } 2025-03-23 12:36:45.765938 | orchestrator | 12:36:45.765 STDOUT terraform:  + allowed_address_pairs { 2025-03-23 12:36:45.765959 | orchestrator | 12:36:45.765 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-23 12:36:45.765988 | orchestrator | 12:36:45.765 STDOUT terraform:  } 2025-03-23 12:36:45.765998 | orchestrator | 12:36:45.765 STDOUT terraform:  + allowed_address_pairs { 2025-03-23 12:36:45.766005 | orchestrator | 12:36:45.765 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-23 12:36:45.766011 | orchestrator | 12:36:45.765 STDOUT terraform:  } 2025-03-23 12:36:45.766097 | orchestrator | 12:36:45.766 STDOUT terraform:  + allowed_address_pairs { 2025-03-23 12:36:45.766135 | orchestrator | 12:36:45.766 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-23 12:36:45.766163 | orchestrator | 12:36:45.766 STDOUT terraform:  } 2025-03-23 12:36:45.766170 | orchestrator | 12:36:45.766 STDOUT terraform:  + binding (known after apply) 2025-03-23 12:36:45.766198 | orchestrator | 12:36:45.766 STDOUT terraform:  + fixed_ip { 2025-03-23 12:36:45.766206 | orchestrator | 12:36:45.766 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-03-23 12:36:45.766227 | orchestrator | 12:36:45.766 STDOUT terraform:  
+ subnet_id = (known after apply) 2025-03-23 12:36:45.766246 | orchestrator | 12:36:45.766 STDOUT terraform:  } 2025-03-23 12:36:45.766253 | orchestrator | 12:36:45.766 STDOUT terraform:  } 2025-03-23 12:36:45.766293 | orchestrator | 12:36:45.766 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-03-23 12:36:45.766337 | orchestrator | 12:36:45.766 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-23 12:36:45.766372 | orchestrator | 12:36:45.766 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-23 12:36:45.766408 | orchestrator | 12:36:45.766 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-23 12:36:45.766443 | orchestrator | 12:36:45.766 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-23 12:36:45.766480 | orchestrator | 12:36:45.766 STDOUT terraform:  + all_tags = (known after apply) 2025-03-23 12:36:45.766516 | orchestrator | 12:36:45.766 STDOUT terraform:  + device_id = (known after apply) 2025-03-23 12:36:45.766571 | orchestrator | 12:36:45.766 STDOUT terraform:  + device_owner = (known after apply) 2025-03-23 12:36:45.766608 | orchestrator | 12:36:45.766 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-23 12:36:45.766646 | orchestrator | 12:36:45.766 STDOUT terraform:  + dns_name = (known after apply) 2025-03-23 12:36:45.766680 | orchestrator | 12:36:45.766 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.766714 | orchestrator | 12:36:45.766 STDOUT terraform:  + mac_address = (known after apply) 2025-03-23 12:36:45.766750 | orchestrator | 12:36:45.766 STDOUT terraform:  + network_id = (known after apply) 2025-03-23 12:36:45.766785 | orchestrator | 12:36:45.766 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-23 12:36:45.766820 | orchestrator | 12:36:45.766 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-23 12:36:45.766856 | orchestrator | 12:36:45.766 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.766890 | orchestrator | 12:36:45.766 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-23 12:36:45.766926 | orchestrator | 12:36:45.766 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-23 12:36:45.766933 | orchestrator | 12:36:45.766 STDOUT terraform:  + allowed_address_pairs { 2025-03-23 12:36:45.766971 | orchestrator | 12:36:45.766 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-23 12:36:45.767000 | orchestrator | 12:36:45.766 STDOUT terraform:  } 2025-03-23 12:36:45.767024 | orchestrator | 12:36:45.766 STDOUT terraform:  + allowed_address_pairs { 2025-03-23 12:36:45.767031 | orchestrator | 12:36:45.766 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-23 12:36:45.767051 | orchestrator | 12:36:45.767 STDOUT terraform:  } 2025-03-23 12:36:45.767058 | orchestrator | 12:36:45.767 STDOUT terraform:  + allowed_address_pairs { 2025-03-23 12:36:45.767078 | orchestrator | 12:36:45.767 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-23 12:36:45.767106 | orchestrator | 12:36:45.767 STDOUT terraform:  } 2025-03-23 12:36:45.767112 | orchestrator | 12:36:45.767 STDOUT terraform:  + allowed_address_pairs { 2025-03-23 12:36:45.767133 | orchestrator | 12:36:45.767 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-23 12:36:45.767165 | orchestrator | 12:36:45.767 STDOUT terraform:  } 2025-03-23 12:36:45.767172 | orchestrator | 12:36:45.767 STDOUT terraform:  + binding (known after apply) 
2025-03-23 12:36:45.767202 | orchestrator | 12:36:45.767 STDOUT terraform:  + fixed_ip { 2025-03-23 12:36:45.767208 | orchestrator | 12:36:45.767 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-03-23 12:36:45.767229 | orchestrator | 12:36:45.767 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-23 12:36:45.767248 | orchestrator | 12:36:45.767 STDOUT terraform:  } 2025-03-23 12:36:45.767255 | orchestrator | 12:36:45.767 STDOUT terraform:  } 2025-03-23 12:36:45.767292 | orchestrator | 12:36:45.767 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-03-23 12:36:45.767338 | orchestrator | 12:36:45.767 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-23 12:36:45.767374 | orchestrator | 12:36:45.767 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-23 12:36:45.767409 | orchestrator | 12:36:45.767 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-23 12:36:45.767445 | orchestrator | 12:36:45.767 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-23 12:36:45.767483 | orchestrator | 12:36:45.767 STDOUT terraform:  + all_tags = (known after apply) 2025-03-23 12:36:45.767520 | orchestrator | 12:36:45.767 STDOUT terraform:  + device_id = (known after apply) 2025-03-23 12:36:45.767556 | orchestrator | 12:36:45.767 STDOUT terraform:  + device_owner = (known after apply) 2025-03-23 12:36:45.767590 | orchestrator | 12:36:45.767 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-23 12:36:45.767625 | orchestrator | 12:36:45.767 STDOUT terraform:  + dns_name = (known after apply) 2025-03-23 12:36:45.767662 | orchestrator | 12:36:45.767 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.767698 | orchestrator | 12:36:45.767 STDOUT terraform:  + mac_address = (known after apply) 2025-03-23 12:36:45.767733 | orchestrator | 12:36:45.767 STDOUT terraform:  + network_id = (known after apply) 2025-03-23 12:36:45.767767 | orchestrator | 12:36:45.767 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-23 12:36:45.767804 | orchestrator | 12:36:45.767 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-23 12:36:45.767839 | orchestrator | 12:36:45.767 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.767875 | orchestrator | 12:36:45.767 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-23 12:36:45.767911 | orchestrator | 12:36:45.767 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-23 12:36:45.767937 | orchestrator | 12:36:45.767 STDOUT terraform:  + allowed_address_pairs { 2025-03-23 12:36:45.767967 | orchestrator | 12:36:45.767 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-23 12:36:45.767987 | orchestrator | 12:36:45.767 STDOUT terraform:  } 2025-03-23 12:36:45.767993 | orchestrator | 12:36:45.767 STDOUT terraform:  + allowed_address_pairs { 2025-03-23 12:36:45.768014 | orchestrator | 12:36:45.767 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-23 12:36:45.768042 | orchestrator | 12:36:45.768 STDOUT terraform:  } 2025-03-23 12:36:45.768049 | orchestrator | 12:36:45.768 STDOUT terraform:  + allowed_address_pairs { 2025-03-23 12:36:45.768069 | orchestrator | 12:36:45.768 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-23 12:36:45.768097 | orchestrator | 12:36:45.768 STDOUT terraform:  } 2025-03-23 12:36:45.768104 | orchestrator | 12:36:45.768 STDOUT terraform:  + allowed_address_pairs { 2025-03-23 
12:36:45.768124 | orchestrator | 12:36:45.768 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-23 12:36:45.768155 | orchestrator | 12:36:45.768 STDOUT terraform:  } 2025-03-23 12:36:45.768162 | orchestrator | 12:36:45.768 STDOUT terraform:  + binding (known after apply) 2025-03-23 12:36:45.768190 | orchestrator | 12:36:45.768 STDOUT terraform:  + fixed_ip { 2025-03-23 12:36:45.768197 | orchestrator | 12:36:45.768 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-03-23 12:36:45.768219 | orchestrator | 12:36:45.768 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-23 12:36:45.768244 | orchestrator | 12:36:45.768 STDOUT terraform:  } 2025-03-23 12:36:45.768251 | orchestrator | 12:36:45.768 STDOUT terraform:  } 2025-03-23 12:36:45.768288 | orchestrator | 12:36:45.768 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-03-23 12:36:45.768337 | orchestrator | 12:36:45.768 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-03-23 12:36:45.768344 | orchestrator | 12:36:45.768 STDOUT terraform:  + force_destroy = false 2025-03-23 12:36:45.768381 | orchestrator | 12:36:45.768 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.768409 | orchestrator | 12:36:45.768 STDOUT terraform:  + port_id = (known after apply) 2025-03-23 12:36:45.768437 | orchestrator | 12:36:45.768 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.768481 | orchestrator | 12:36:45.768 STDOUT terraform:  + router_id = (known after apply) 2025-03-23 12:36:45.768510 | orchestrator | 12:36:45.768 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-23 12:36:45.768581 | orchestrator | 12:36:45.768 STDOUT terraform:  } 2025-03-23 12:36:45.768588 | orchestrator | 12:36:45.768 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-03-23 12:36:45.768618 | orchestrator | 12:36:45.768 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-03-23 12:36:45.769212 | orchestrator | 12:36:45.768 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-23 12:36:45.769261 | orchestrator | 12:36:45.768 STDOUT terraform:  + all_tags = (known after apply) 2025-03-23 12:36:45.769268 | orchestrator | 12:36:45.768 STDOUT terraform:  + availability_zone_hints = [ 2025-03-23 12:36:45.769274 | orchestrator | 12:36:45.768 STDOUT terraform:  + "nova", 2025-03-23 12:36:45.769280 | orchestrator | 12:36:45.768 STDOUT terraform:  ] 2025-03-23 12:36:45.769285 | orchestrator | 12:36:45.768 STDOUT terraform:  + distributed = (known after apply) 2025-03-23 12:36:45.769290 | orchestrator | 12:36:45.768 STDOUT terraform:  + enable_snat = (known after apply) 2025-03-23 12:36:45.769299 | orchestrator | 12:36:45.768 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-03-23 12:36:45.769305 | orchestrator | 12:36:45.768 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.769310 | orchestrator | 12:36:45.768 STDOUT terraform:  + name = "testbed" 2025-03-23 12:36:45.769315 | orchestrator | 12:36:45.768 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.769320 | orchestrator | 12:36:45.768 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-23 12:36:45.769325 | orchestrator | 12:36:45.769 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-03-23 12:36:45.769330 | orchestrator | 12:36:45.769 STDOUT terraform:  } 2025-03-23 12:36:45.769335 | orchestrator | 12:36:45.769 
STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-03-23 12:36:45.769343 | orchestrator | 12:36:45.769 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-03-23 12:36:45.769351 | orchestrator | 12:36:45.769 STDOUT terraform:  + description = "ssh" 2025-03-23 12:36:45.769368 | orchestrator | 12:36:45.769 STDOUT terraform:  + direction = "ingress" 2025-03-23 12:36:45.769373 | orchestrator | 12:36:45.769 STDOUT terraform:  + ethertype = "IPv4" 2025-03-23 12:36:45.769378 | orchestrator | 12:36:45.769 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.769383 | orchestrator | 12:36:45.769 STDOUT terraform:  + port_range_max = 22 2025-03-23 12:36:45.769389 | orchestrator | 12:36:45.769 STDOUT terraform:  + port_range_min = 22 2025-03-23 12:36:45.769395 | orchestrator | 12:36:45.769 STDOUT terraform:  + protocol = "tcp" 2025-03-23 12:36:45.769441 | orchestrator | 12:36:45.769 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.769450 | orchestrator | 12:36:45.769 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-23 12:36:45.769467 | orchestrator | 12:36:45.769 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-23 12:36:45.769513 | orchestrator | 12:36:45.769 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-23 12:36:45.769552 | orchestrator | 12:36:45.769 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-23 12:36:45.769578 | orchestrator | 12:36:45.769 STDOUT terraform:  } 2025-03-23 12:36:45.769632 | orchestrator | 12:36:45.769 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-03-23 12:36:45.769698 | orchestrator | 12:36:45.769 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-03-23 12:36:45.769742 | orchestrator | 12:36:45.769 STDOUT terraform:  + description = "wireguard" 2025-03-23 12:36:45.769760 | orchestrator | 12:36:45.769 STDOUT terraform:  + direction = "ingress" 2025-03-23 12:36:45.769781 | orchestrator | 12:36:45.769 STDOUT terraform:  + ethertype = "IPv4" 2025-03-23 12:36:45.769856 | orchestrator | 12:36:45.769 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.770066 | orchestrator | 12:36:45.769 STDOUT terraform:  + port_range_max = 51820 2025-03-23 12:36:45.770076 | orchestrator | 12:36:45.769 STDOUT terraform:  + port_range_min = 51820 2025-03-23 12:36:45.770084 | orchestrator | 12:36:45.769 STDOUT terraform:  + protocol = "udp" 2025-03-23 12:36:45.770141 | orchestrator | 12:36:45.769 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.770151 | orchestrator | 12:36:45.769 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-23 12:36:45.770156 | orchestrator | 12:36:45.769 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-23 12:36:45.770172 | orchestrator | 12:36:45.769 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-23 12:36:45.770177 | orchestrator | 12:36:45.770 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-23 12:36:45.770182 | orchestrator | 12:36:45.770 STDOUT terraform:  } 2025-03-23 12:36:45.770190 | orchestrator | 12:36:45.770 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-03-23 12:36:45.770216 | orchestrator | 12:36:45.770 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" 
"security_group_management_rule3" { 2025-03-23 12:36:45.770224 | orchestrator | 12:36:45.770 STDOUT terraform:  + direction = "ingress" 2025-03-23 12:36:45.770230 | orchestrator | 12:36:45.770 STDOUT terraform:  + ethertype = "IPv4" 2025-03-23 12:36:45.770287 | orchestrator | 12:36:45.770 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.770295 | orchestrator | 12:36:45.770 STDOUT terraform:  + protocol = "tcp" 2025-03-23 12:36:45.770328 | orchestrator | 12:36:45.770 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.770374 | orchestrator | 12:36:45.770 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-23 12:36:45.770404 | orchestrator | 12:36:45.770 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-03-23 12:36:45.770448 | orchestrator | 12:36:45.770 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-23 12:36:45.770478 | orchestrator | 12:36:45.770 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-23 12:36:45.770485 | orchestrator | 12:36:45.770 STDOUT terraform:  } 2025-03-23 12:36:45.770574 | orchestrator | 12:36:45.770 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-03-23 12:36:45.770631 | orchestrator | 12:36:45.770 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-03-23 12:36:45.770671 | orchestrator | 12:36:45.770 STDOUT terraform:  + direction = "ingress" 2025-03-23 12:36:45.770692 | orchestrator | 12:36:45.770 STDOUT terraform:  + ethertype = "IPv4" 2025-03-23 12:36:45.770731 | orchestrator | 12:36:45.770 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.770760 | orchestrator | 12:36:45.770 STDOUT terraform:  + protocol = "udp" 2025-03-23 12:36:45.770793 | orchestrator | 12:36:45.770 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.770834 | orchestrator | 12:36:45.770 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-23 12:36:45.770866 | orchestrator | 12:36:45.770 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-03-23 12:36:45.770908 | orchestrator | 12:36:45.770 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-23 12:36:45.770942 | orchestrator | 12:36:45.770 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-23 12:36:45.770959 | orchestrator | 12:36:45.770 STDOUT terraform:  } 2025-03-23 12:36:45.771016 | orchestrator | 12:36:45.770 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-03-23 12:36:45.771084 | orchestrator | 12:36:45.771 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-03-23 12:36:45.771109 | orchestrator | 12:36:45.771 STDOUT terraform:  + direction = "ingress" 2025-03-23 12:36:45.771143 | orchestrator | 12:36:45.771 STDOUT terraform:  + ethertype = "IPv4" 2025-03-23 12:36:45.771182 | orchestrator | 12:36:45.771 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.771209 | orchestrator | 12:36:45.771 STDOUT terraform:  + protocol = "icmp" 2025-03-23 12:36:45.771240 | orchestrator | 12:36:45.771 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.771270 | orchestrator | 12:36:45.771 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-23 12:36:45.771309 | orchestrator | 12:36:45.771 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-23 12:36:45.771339 | orchestrator | 12:36:45.771 
STDOUT terraform:  + security_group_id = (known after apply) 2025-03-23 12:36:45.771383 | orchestrator | 12:36:45.771 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-23 12:36:45.771390 | orchestrator | 12:36:45.771 STDOUT terraform:  } 2025-03-23 12:36:45.771457 | orchestrator | 12:36:45.771 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-03-23 12:36:45.771519 | orchestrator | 12:36:45.771 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-03-23 12:36:45.771556 | orchestrator | 12:36:45.771 STDOUT terraform:  + direction = "ingress" 2025-03-23 12:36:45.771575 | orchestrator | 12:36:45.771 STDOUT terraform:  + ethertype = "IPv4" 2025-03-23 12:36:45.771603 | orchestrator | 12:36:45.771 STDOUT terraform:  2025-03-23 12:36:45.771677 | orchestrator | 12:36:45.771 STDOUT terraform: + id = (known after apply) 2025-03-23 12:36:45.771700 | orchestrator | 12:36:45.771 STDOUT terraform:  + protocol = "tcp" 2025-03-23 12:36:45.771731 | orchestrator | 12:36:45.771 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.771776 | orchestrator | 12:36:45.771 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-23 12:36:45.771800 | orchestrator | 12:36:45.771 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-23 12:36:45.771847 | orchestrator | 12:36:45.771 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-23 12:36:45.771879 | orchestrator | 12:36:45.771 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-23 12:36:45.771909 | orchestrator | 12:36:45.771 STDOUT terraform:  } 2025-03-23 12:36:45.771958 | orchestrator | 12:36:45.771 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-03-23 12:36:45.772023 | orchestrator | 12:36:45.771 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-03-23 12:36:45.772061 | orchestrator | 12:36:45.772 STDOUT terraform:  + direction = "ingress" 2025-03-23 12:36:45.772082 | orchestrator | 12:36:45.772 STDOUT terraform:  + ethertype = "IPv4" 2025-03-23 12:36:45.772114 | orchestrator | 12:36:45.772 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.772149 | orchestrator | 12:36:45.772 STDOUT terraform:  + protocol = "udp" 2025-03-23 12:36:45.772181 | orchestrator | 12:36:45.772 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.772223 | orchestrator | 12:36:45.772 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-23 12:36:45.772248 | orchestrator | 12:36:45.772 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-23 12:36:45.772292 | orchestrator | 12:36:45.772 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-23 12:36:45.772323 | orchestrator | 12:36:45.772 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-23 12:36:45.772331 | orchestrator | 12:36:45.772 STDOUT terraform:  } 2025-03-23 12:36:45.772396 | orchestrator | 12:36:45.772 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-03-23 12:36:45.772447 | orchestrator | 12:36:45.772 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-03-23 12:36:45.772486 | orchestrator | 12:36:45.772 STDOUT terraform:  + direction = "ingress" 2025-03-23 12:36:45.772506 | orchestrator | 12:36:45.772 STDOUT terraform:  + ethertype = "IPv4" 2025-03-23 
12:36:45.772578 | orchestrator | 12:36:45.772 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.772620 | orchestrator | 12:36:45.772 STDOUT terraform:  + protocol = "icmp" 2025-03-23 12:36:45.772629 | orchestrator | 12:36:45.772 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.772651 | orchestrator | 12:36:45.772 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-23 12:36:45.772675 | orchestrator | 12:36:45.772 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-23 12:36:45.772708 | orchestrator | 12:36:45.772 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-23 12:36:45.772735 | orchestrator | 12:36:45.772 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-23 12:36:45.772743 | orchestrator | 12:36:45.772 STDOUT terraform:  } 2025-03-23 12:36:45.772794 | orchestrator | 12:36:45.772 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-03-23 12:36:45.772857 | orchestrator | 12:36:45.772 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-03-23 12:36:45.772877 | orchestrator | 12:36:45.772 STDOUT terraform:  + description = "vrrp" 2025-03-23 12:36:45.772902 | orchestrator | 12:36:45.772 STDOUT terraform:  + direction = "ingress" 2025-03-23 12:36:45.772922 | orchestrator | 12:36:45.772 STDOUT terraform:  + ethertype = "IPv4" 2025-03-23 12:36:45.772955 | orchestrator | 12:36:45.772 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.772975 | orchestrator | 12:36:45.772 STDOUT terraform:  + protocol = "112" 2025-03-23 12:36:45.773007 | orchestrator | 12:36:45.772 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.773037 | orchestrator | 12:36:45.773 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-23 12:36:45.773069 | orchestrator | 12:36:45.773 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-23 12:36:45.773097 | orchestrator | 12:36:45.773 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-23 12:36:45.773121 | orchestrator | 12:36:45.773 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-23 12:36:45.773129 | orchestrator | 12:36:45.773 STDOUT terraform:  } 2025-03-23 12:36:45.773180 | orchestrator | 12:36:45.773 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-03-23 12:36:45.773252 | orchestrator | 12:36:45.773 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-03-23 12:36:45.773281 | orchestrator | 12:36:45.773 STDOUT terraform:  + all_tags = (known after apply) 2025-03-23 12:36:45.773314 | orchestrator | 12:36:45.773 STDOUT terraform:  + description = "management security group" 2025-03-23 12:36:45.773344 | orchestrator | 12:36:45.773 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.773375 | orchestrator | 12:36:45.773 STDOUT terraform:  + name = "testbed-management" 2025-03-23 12:36:45.773403 | orchestrator | 12:36:45.773 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.773436 | orchestrator | 12:36:45.773 STDOUT terraform:  + stateful = (known after apply) 2025-03-23 12:36:45.773462 | orchestrator | 12:36:45.773 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-23 12:36:45.773470 | orchestrator | 12:36:45.773 STDOUT terraform:  } 2025-03-23 12:36:45.773526 | orchestrator | 12:36:45.773 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will 
be created 2025-03-23 12:36:45.773572 | orchestrator | 12:36:45.773 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-03-23 12:36:45.773599 | orchestrator | 12:36:45.773 STDOUT terraform:  + all_tags = (known after apply) 2025-03-23 12:36:45.773631 | orchestrator | 12:36:45.773 STDOUT terraform:  + description = "node security group" 2025-03-23 12:36:45.773659 | orchestrator | 12:36:45.773 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.773683 | orchestrator | 12:36:45.773 STDOUT terraform:  + name = "testbed-node" 2025-03-23 12:36:45.773721 | orchestrator | 12:36:45.773 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.773736 | orchestrator | 12:36:45.773 STDOUT terraform:  + stateful = (known after apply) 2025-03-23 12:36:45.773766 | orchestrator | 12:36:45.773 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-23 12:36:45.773774 | orchestrator | 12:36:45.773 STDOUT terraform:  } 2025-03-23 12:36:45.773828 | orchestrator | 12:36:45.773 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-03-23 12:36:45.773864 | orchestrator | 12:36:45.773 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-03-23 12:36:45.773915 | orchestrator | 12:36:45.773 STDOUT terraform:  + all_tags = (known after apply) 2025-03-23 12:36:45.773936 | orchestrator | 12:36:45.773 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-03-23 12:36:45.773957 | orchestrator | 12:36:45.773 STDOUT terraform:  + dns_nameservers = [ 2025-03-23 12:36:45.773974 | orchestrator | 12:36:45.773 STDOUT terraform:  + "8.8.8.8", 2025-03-23 12:36:45.774000 | orchestrator | 12:36:45.773 STDOUT terraform:  + "9.9.9.9", 2025-03-23 12:36:45.774007 | orchestrator | 12:36:45.773 STDOUT terraform:  ] 2025-03-23 12:36:45.774038 | orchestrator | 12:36:45.773 STDOUT terraform:  + enable_dhcp = true 2025-03-23 12:36:45.774064 | orchestrator | 12:36:45.774 STDOUT terraform:  + gateway_ip = (known after apply) 2025-03-23 12:36:45.774093 | orchestrator | 12:36:45.774 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.774112 | orchestrator | 12:36:45.774 STDOUT terraform:  + ip_version = 4 2025-03-23 12:36:45.774142 | orchestrator | 12:36:45.774 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-03-23 12:36:45.774175 | orchestrator | 12:36:45.774 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-03-23 12:36:45.774247 | orchestrator | 12:36:45.774 STDOUT terraform:  + name = "subnet-testbed-management" 2025-03-23 12:36:45.774255 | orchestrator | 12:36:45.774 STDOUT terraform:  + network_id = (known after apply) 2025-03-23 12:36:45.774261 | orchestrator | 12:36:45.774 STDOUT terraform:  + no_gateway = false 2025-03-23 12:36:45.774284 | orchestrator | 12:36:45.774 STDOUT terraform:  + region = (known after apply) 2025-03-23 12:36:45.774313 | orchestrator | 12:36:45.774 STDOUT terraform:  + service_types = (known after apply) 2025-03-23 12:36:45.774354 | orchestrator | 12:36:45.774 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-23 12:36:45.774362 | orchestrator | 12:36:45.774 STDOUT terraform:  + allocation_pool { 2025-03-23 12:36:45.774384 | orchestrator | 12:36:45.774 STDOUT terraform:  + end = "192.168.31.250" 2025-03-23 12:36:45.774408 | orchestrator | 12:36:45.774 STDOUT terraform:  + start = "192.168.31.200" 2025-03-23 12:36:45.774416 | orchestrator | 12:36:45.774 STDOUT terraform:  } 2025-03-23 12:36:45.774432 | orchestrator | 
12:36:45.774 STDOUT terraform:  } 2025-03-23 12:36:45.774460 | orchestrator | 12:36:45.774 STDOUT terraform:  # terraform_data.image will be created 2025-03-23 12:36:45.774484 | orchestrator | 12:36:45.774 STDOUT terraform:  + resource "terraform_data" "image" { 2025-03-23 12:36:45.774509 | orchestrator | 12:36:45.774 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.774548 | orchestrator | 12:36:45.774 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-03-23 12:36:45.774555 | orchestrator | 12:36:45.774 STDOUT terraform:  + output = (known after apply) 2025-03-23 12:36:45.774562 | orchestrator | 12:36:45.774 STDOUT terraform:  } 2025-03-23 12:36:45.774595 | orchestrator | 12:36:45.774 STDOUT terraform:  # terraform_data.image_node will be created 2025-03-23 12:36:45.774622 | orchestrator | 12:36:45.774 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-03-23 12:36:45.774654 | orchestrator | 12:36:45.774 STDOUT terraform:  + id = (known after apply) 2025-03-23 12:36:45.774662 | orchestrator | 12:36:45.774 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-03-23 12:36:45.774688 | orchestrator | 12:36:45.774 STDOUT terraform:  + output = (known after apply) 2025-03-23 12:36:45.774696 | orchestrator | 12:36:45.774 STDOUT terraform:  } 2025-03-23 12:36:45.774727 | orchestrator | 12:36:45.774 STDOUT terraform: Plan: 82 to add, 0 to change, 0 to destroy. 2025-03-23 12:36:45.774735 | orchestrator | 12:36:45.774 STDOUT terraform: Changes to Outputs: 2025-03-23 12:36:45.774763 | orchestrator | 12:36:45.774 STDOUT terraform:  + manager_address = (sensitive value) 2025-03-23 12:36:45.774788 | orchestrator | 12:36:45.774 STDOUT terraform:  + private_key = (sensitive value) 2025-03-23 12:36:45.926675 | orchestrator | 12:36:45.926 STDOUT terraform: terraform_data.image_node: Creating... 2025-03-23 12:36:45.931053 | orchestrator | 12:36:45.926 STDOUT terraform: terraform_data.image: Creating... 2025-03-23 12:36:45.931146 | orchestrator | 12:36:45.926 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=ba7b29ca-b448-858e-6fce-18367eec359c] 2025-03-23 12:36:45.931168 | orchestrator | 12:36:45.926 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=af35b556-05cd-2797-12c4-16d46f4a384c] 2025-03-23 12:36:45.931195 | orchestrator | 12:36:45.930 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-03-23 12:36:45.931613 | orchestrator | 12:36:45.931 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-03-23 12:36:45.937969 | orchestrator | 12:36:45.937 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating... 2025-03-23 12:36:45.938271 | orchestrator | 12:36:45.938 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating... 2025-03-23 12:36:45.938836 | orchestrator | 12:36:45.938 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-03-23 12:36:45.939445 | orchestrator | 12:36:45.939 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating... 2025-03-23 12:36:45.942525 | orchestrator | 12:36:45.939 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-03-23 12:36:45.942589 | orchestrator | 12:36:45.942 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-03-23 12:36:45.943587 | orchestrator | 12:36:45.943 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 
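For reference, the node_port_management blocks planned above would come from HCL roughly like the sketch below. The literal addresses are taken from the plan output; var.node_count, the cidrhost() offset and the network/subnet/security-group references are assumptions, since the testbed's actual Terraform source is not part of this log.

    # Sketch only: shape of the planned node_port_management ports.
    resource "openstack_networking_port_v2" "node_port_management" {
      count      = var.node_count                    # ports [0]..[5] in this run
      network_id = openstack_networking_network_v2.net_management.id

      security_group_ids = [
        openstack_networking_secgroup_v2.security_group_node.id,
      ]

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id
        ip_address = cidrhost("192.168.16.0/20", 10 + count.index)  # .10 .. .15
      }

      # VIPs and internal prefixes the nodes are allowed to answer for,
      # matching the allowed_address_pairs entries in the plan above.
      allowed_address_pairs { ip_address = "192.168.112.0/20" }
      allowed_address_pairs { ip_address = "192.168.16.254/20" }
      allowed_address_pairs { ip_address = "192.168.16.8/20" }
      allowed_address_pairs { ip_address = "192.168.16.9/20" }
    }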
2025-03-23 12:36:45.949792 | orchestrator | 12:36:45.949 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating... 2025-03-23 12:36:46.372662 | orchestrator | 12:36:46.372 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-03-23 12:36:46.380950 | orchestrator | 12:36:46.380 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating... 2025-03-23 12:36:46.390457 | orchestrator | 12:36:46.390 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-03-23 12:36:46.401210 | orchestrator | 12:36:46.401 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating... 2025-03-23 12:36:46.620249 | orchestrator | 12:36:46.619 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-03-23 12:36:46.626509 | orchestrator | 12:36:46.626 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-03-23 12:36:51.853250 | orchestrator | 12:36:51.852 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=68798a12-3492-4cdb-8436-171ca55fdc1b] 2025-03-23 12:36:51.860584 | orchestrator | 12:36:51.860 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-03-23 12:36:55.939573 | orchestrator | 12:36:55.939 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed] 2025-03-23 12:36:55.939816 | orchestrator | 12:36:55.939 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... [10s elapsed] 2025-03-23 12:36:55.940078 | orchestrator | 12:36:55.939 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-03-23 12:36:55.940314 | orchestrator | 12:36:55.940 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed] 2025-03-23 12:36:55.944812 | orchestrator | 12:36:55.944 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-03-23 12:36:55.950991 | orchestrator | 12:36:55.950 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... [10s elapsed] 2025-03-23 12:36:56.381782 | orchestrator | 12:36:56.381 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... [10s elapsed] 2025-03-23 12:36:56.402367 | orchestrator | 12:36:56.402 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed] 2025-03-23 12:36:56.527318 | orchestrator | 12:36:56.527 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 11s [id=8fbee761-a5d2-4623-bf19-8989346ac6dd] 2025-03-23 12:36:56.533645 | orchestrator | 12:36:56.533 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating... 2025-03-23 12:36:56.548831 | orchestrator | 12:36:56.548 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=94c1a16d-83bf-4bb0-903c-e73c9d50c029] 2025-03-23 12:36:56.553993 | orchestrator | 12:36:56.553 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating... 
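The terraform_data.image resource planned earlier (input "Ubuntu 24.04") and the image data source read above could be wired together as in the sketch below; image_node follows the same pattern. The name-based lookup and the most_recent flag are assumptions, the log only shows the terraform_data input and the data source being read.

    # Sketch only: feeding the image name into a Glance lookup.
    resource "terraform_data" "image" {
      input = "Ubuntu 24.04"
    }

    data "openstack_images_image_v2" "image" {
      name        = terraform_data.image.output   # output mirrors input
      most_recent = true
    }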
2025-03-23 12:36:56.566723 | orchestrator | 12:36:56.566 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 11s [id=9e74186d-4472-4929-8a20-9843938772e5] 2025-03-23 12:36:56.572832 | orchestrator | 12:36:56.572 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-03-23 12:36:56.580773 | orchestrator | 12:36:56.580 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 11s [id=de771d17-f0b5-4049-b48c-6cbd0f44ea02] 2025-03-23 12:36:56.588063 | orchestrator | 12:36:56.587 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-03-23 12:36:56.592043 | orchestrator | 12:36:56.591 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 11s [id=6d22c86c-8b28-4de1-9381-02b0bcd9097d] 2025-03-23 12:36:56.598888 | orchestrator | 12:36:56.598 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating... 2025-03-23 12:36:56.612350 | orchestrator | 12:36:56.612 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=ea8f35dc-a35e-4089-a77c-db984e90bcf5] 2025-03-23 12:36:56.623033 | orchestrator | 12:36:56.622 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-03-23 12:36:56.627285 | orchestrator | 12:36:56.627 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-03-23 12:36:56.655887 | orchestrator | 12:36:56.655 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 11s [id=cb506f9f-b661-4909-8304-0e1e52bc58d9] 2025-03-23 12:36:56.664744 | orchestrator | 12:36:56.664 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-03-23 12:36:56.667024 | orchestrator | 12:36:56.666 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 11s [id=006f5652-5a72-4e84-ab95-2470543ee754] 2025-03-23 12:36:56.672063 | orchestrator | 12:36:56.671 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-03-23 12:36:56.802578 | orchestrator | 12:36:56.802 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=be2f0237-b6d9-43f0-8b5b-dde81dc603fc] 2025-03-23 12:36:56.813885 | orchestrator | 12:36:56.813 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-03-23 12:37:01.861288 | orchestrator | 12:37:01.860 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-03-23 12:37:02.043336 | orchestrator | 12:37:02.042 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=3a31d3ab-ce8e-4019-949d-50084d47806b] 2025-03-23 12:37:02.051462 | orchestrator | 12:37:02.051 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-03-23 12:37:06.534410 | orchestrator | 12:37:06.534 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed] 2025-03-23 12:37:06.554551 | orchestrator | 12:37:06.554 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed] 2025-03-23 12:37:06.573847 | orchestrator | 12:37:06.573 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... 
[10s elapsed] 2025-03-23 12:37:06.588859 | orchestrator | 12:37:06.588 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-03-23 12:37:06.600259 | orchestrator | 12:37:06.599 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... [10s elapsed] 2025-03-23 12:37:06.623292 | orchestrator | 12:37:06.623 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-03-23 12:37:06.666771 | orchestrator | 12:37:06.666 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-03-23 12:37:06.673011 | orchestrator | 12:37:06.672 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-03-23 12:37:06.725717 | orchestrator | 12:37:06.725 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 10s [id=4783091f-b49d-4bb1-a12d-8bcd2b1f8992] 2025-03-23 12:37:06.745769 | orchestrator | 12:37:06.745 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-03-23 12:37:06.747406 | orchestrator | 12:37:06.747 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 10s [id=87cd8e5e-a11a-492b-892b-8d669e416dd6] 2025-03-23 12:37:06.756999 | orchestrator | 12:37:06.756 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-03-23 12:37:06.792006 | orchestrator | 12:37:06.791 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=fc6f372a-e295-454e-89b3-dab7283bde6d] 2025-03-23 12:37:06.799014 | orchestrator | 12:37:06.798 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-03-23 12:37:06.814822 | orchestrator | 12:37:06.814 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-03-23 12:37:06.821392 | orchestrator | 12:37:06.821 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=1784947b-28d0-43a3-b38e-e79d14638f2f] 2025-03-23 12:37:06.829290 | orchestrator | 12:37:06.829 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-03-23 12:37:06.847301 | orchestrator | 12:37:06.847 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 10s [id=93f5d818-3956-4b7b-8e36-fec820f5f0d8] 2025-03-23 12:37:06.852961 | orchestrator | 12:37:06.852 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-03-23 12:37:06.854090 | orchestrator | 12:37:06.853 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=98af1d1a-144c-4faa-87cf-25faeb3fb806] 2025-03-23 12:37:06.868721 | orchestrator | 12:37:06.868 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-03-23 12:37:06.872843 | orchestrator | 12:37:06.872 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=0b1f1101367819be463d4b83a3b5556da1bf8aa1] 2025-03-23 12:37:06.877979 | orchestrator | 12:37:06.877 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=a3062c13-e6b5-492c-acee-00491b2788e1] 2025-03-23 12:37:06.880232 | orchestrator | 12:37:06.880 STDOUT terraform: local_file.id_rsa_pub: Creating... 
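One plausible wiring for the "testbed" keypair created earlier and the id_rsa / id_rsa_pub files written above is sketched below. Letting Nova generate the key (no public_key supplied) and the file paths are assumptions; only the resource names, the keypair name "testbed" and the sensitive private_key output are visible in the log.

    # Sketch only: persist the generated build keypair locally.
    resource "openstack_compute_keypair_v2" "key" {
      name = "testbed"                            # no public_key -> Nova generates one
    }

    resource "local_sensitive_file" "id_rsa" {
      filename        = "${path.module}/.id_rsa.testbed"
      content         = openstack_compute_keypair_v2.key.private_key
      file_permission = "0600"
    }

    resource "local_file" "id_rsa_pub" {
      filename = "${path.module}/.id_rsa.testbed.pub"
      content  = openstack_compute_keypair_v2.key.public_key
    }

    output "private_key" {
      value     = openstack_compute_keypair_v2.key.private_key
      sensitive = true
    }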
2025-03-23 12:37:06.884266 | orchestrator | 12:37:06.884 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-03-23 12:37:06.885372 | orchestrator | 12:37:06.885 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=d7b279e1b8184d4e63af0dc47cdbf8a75028f6f2] 2025-03-23 12:37:06.888412 | orchestrator | 12:37:06.888 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=2a7b2e0f-187f-479d-baca-3c89b3a54e1f] 2025-03-23 12:37:07.177966 | orchestrator | 12:37:07.177 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=736c3593-7d1d-464c-a6b9-c247d22d3bbf] 2025-03-23 12:37:12.052964 | orchestrator | 12:37:12.052 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-03-23 12:37:12.358858 | orchestrator | 12:37:12.358 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=597da941-c896-4f50-9509-26c2623e2e81] 2025-03-23 12:37:12.534963 | orchestrator | 12:37:12.534 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=08c0602e-aea3-4820-a062-a2c29786abb6] 2025-03-23 12:37:12.543809 | orchestrator | 12:37:12.543 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-03-23 12:37:16.746254 | orchestrator | 12:37:16.745 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-03-23 12:37:16.758409 | orchestrator | 12:37:16.758 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-03-23 12:37:16.799444 | orchestrator | 12:37:16.799 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-03-23 12:37:16.830275 | orchestrator | 12:37:16.830 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-03-23 12:37:16.855532 | orchestrator | 12:37:16.855 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-03-23 12:37:17.094729 | orchestrator | 12:37:17.094 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=4f747510-de15-45d7-9b81-ad0561662c74] 2025-03-23 12:37:17.117175 | orchestrator | 12:37:17.116 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=62c3c2b3-a12b-436d-ac05-ef0f7338b1b5] 2025-03-23 12:37:17.152949 | orchestrator | 12:37:17.152 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=262a0bcf-e399-49b7-b2ad-e29e608abf48] 2025-03-23 12:37:17.202714 | orchestrator | 12:37:17.202 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=b604898d-e561-495d-9e07-a30ca177e2ec] 2025-03-23 12:37:17.220696 | orchestrator | 12:37:17.220 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=43d9102f-90c9-419e-a677-363110a73750] 2025-03-23 12:37:19.188578 | orchestrator | 12:37:19.188 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 6s [id=4903df05-93d7-4fdb-88a0-b0ee34ce1a08] 2025-03-23 12:37:19.195398 | orchestrator | 12:37:19.195 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 
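The management subnet and router whose creation is logged above correspond to HCL along these lines; all literal values are taken from the plan output, while the network reference is an assumption.

    # Sketch only: management subnet, router and router interface.
    resource "openstack_networking_subnet_v2" "subnet_management" {
      name            = "subnet-testbed-management"
      network_id      = openstack_networking_network_v2.net_management.id
      cidr            = "192.168.16.0/20"
      ip_version      = 4
      enable_dhcp     = true
      dns_nameservers = ["8.8.8.8", "9.9.9.9"]

      allocation_pool {
        start = "192.168.31.200"
        end   = "192.168.31.250"
      }
    }

    resource "openstack_networking_router_v2" "router" {
      name                    = "testbed"
      external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      availability_zone_hints = ["nova"]
    }

    resource "openstack_networking_router_interface_v2" "router_interface" {
      router_id = openstack_networking_router_v2.router.id
      subnet_id = openstack_networking_subnet_v2.subnet_management.id
    }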
2025-03-23 12:37:19.196996 | orchestrator | 12:37:19.196 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-03-23 12:37:19.197065 | orchestrator | 12:37:19.196 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-03-23 12:37:19.334783 | orchestrator | 12:37:19.334 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=f89e160f-8329-4c3b-82ed-d30fb1d7598f] 2025-03-23 12:37:19.349794 | orchestrator | 12:37:19.349 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-03-23 12:37:19.352856 | orchestrator | 12:37:19.352 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-03-23 12:37:19.353550 | orchestrator | 12:37:19.353 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-03-23 12:37:19.356112 | orchestrator | 12:37:19.355 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-03-23 12:37:19.357347 | orchestrator | 12:37:19.357 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-03-23 12:37:19.357630 | orchestrator | 12:37:19.357 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-03-23 12:37:19.358744 | orchestrator | 12:37:19.358 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-03-23 12:37:19.358935 | orchestrator | 12:37:19.358 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-03-23 12:37:19.462362 | orchestrator | 12:37:19.462 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=713a7e1f-e994-46a5-8a2f-ced7bca707b9] 2025-03-23 12:37:19.475155 | orchestrator | 12:37:19.474 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-03-23 12:37:19.491367 | orchestrator | 12:37:19.491 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=18c382a9-c50d-4dbd-b128-5a926c5ef421] 2025-03-23 12:37:19.501264 | orchestrator | 12:37:19.501 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-03-23 12:37:19.567789 | orchestrator | 12:37:19.567 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=481ddb57-6004-4db8-b7e8-7edd3e99832d] 2025-03-23 12:37:19.578470 | orchestrator | 12:37:19.578 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-03-23 12:37:19.691455 | orchestrator | 12:37:19.691 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=6372f56b-c388-4f94-858c-83bc50473de4] 2025-03-23 12:37:19.697753 | orchestrator | 12:37:19.697 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-03-23 12:37:19.764200 | orchestrator | 12:37:19.763 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=6d4f6650-c2f1-441f-8c89-269331a9f64a] 2025-03-23 12:37:19.769252 | orchestrator | 12:37:19.769 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 
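The security groups and rules being created above match the plan output earlier in the log; a representative subset is sketched below, the remaining rules (wireguard 51820/udp, internal tcp/udp from 192.168.16.0/20, icmp, and the node rules) follow the same pattern. Which group the vrrp rule is attached to is an assumption.

    # Sketch only: security groups plus two representative ingress rules.
    resource "openstack_networking_secgroup_v2" "security_group_management" {
      name        = "testbed-management"
      description = "management security group"
    }

    resource "openstack_networking_secgroup_v2" "security_group_node" {
      name        = "testbed-node"
      description = "node security group"
    }

    resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      description       = "ssh"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "tcp"
      port_range_min    = 22
      port_range_max    = 22
      remote_ip_prefix  = "0.0.0.0/0"
      security_group_id = openstack_networking_secgroup_v2.security_group_management.id
    }

    resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      description       = "vrrp"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "112"                    # IP protocol 112 = VRRP
      remote_ip_prefix  = "0.0.0.0/0"
      security_group_id = openstack_networking_secgroup_v2.security_group_node.id  # assumed group
    }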
2025-03-23 12:37:19.824742 | orchestrator | 12:37:19.824 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=2df62cc5-a4e2-4cca-ae65-7b8913fa7015] 2025-03-23 12:37:19.833343 | orchestrator | 12:37:19.833 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-03-23 12:37:19.903258 | orchestrator | 12:37:19.902 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=ae46ec7f-b314-409e-8abd-a960cfb04f3d] 2025-03-23 12:37:19.911152 | orchestrator | 12:37:19.910 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-03-23 12:37:20.017713 | orchestrator | 12:37:20.017 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=52d89c85-899a-456b-bb57-7b5a78d927e1] 2025-03-23 12:37:20.030861 | orchestrator | 12:37:20.030 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-03-23 12:37:20.298639 | orchestrator | 12:37:20.298 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=d24817bf-61c8-498b-886f-3334bc851163] 2025-03-23 12:37:20.428808 | orchestrator | 12:37:20.428 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=272cb0dc-3701-4b89-af37-1f501fa8ce95] 2025-03-23 12:37:25.043005 | orchestrator | 12:37:25.042 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=d09537ff-1421-4fec-ac8d-4eccdc275ee4] 2025-03-23 12:37:25.066519 | orchestrator | 12:37:25.066 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=30841d5b-2a1a-4f86-ab02-1f94db8ee16d] 2025-03-23 12:37:25.267524 | orchestrator | 12:37:25.267 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=2a530607-e855-4437-a4e9-94f3c055e3af] 2025-03-23 12:37:25.374151 | orchestrator | 12:37:25.373 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=421d5d32-ef26-42da-b577-d62ed9d2a1a3] 2025-03-23 12:37:25.528110 | orchestrator | 12:37:25.527 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 7s [id=e7ec1586-c8e5-4d80-b3f0-1bab50b5212a] 2025-03-23 12:37:25.961636 | orchestrator | 12:37:25.961 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=45855e77-56fd-4c85-b40f-18f32a050aef] 2025-03-23 12:37:25.978591 | orchestrator | 12:37:25.978 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=39dd17a0-4c09-48d2-8837-475c0d77c1b5] 2025-03-23 12:37:26.551371 | orchestrator | 12:37:26.551 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=034943b5-d8d1-4c81-9a6c-2db21a5882cc] 2025-03-23 12:37:26.576974 | orchestrator | 12:37:26.576 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-03-23 12:37:26.580596 | orchestrator | 12:37:26.580 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-03-23 12:37:26.592207 | orchestrator | 12:37:26.592 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 
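The manager floating IP, its association with the manager port, and the node_server instances starting above could be declared roughly as sketched below. The external pool, flavor, image wiring and instance names are assumptions; the resource names and the port-based attachment match the log.

    # Sketch only: manager floating IP and node servers.
    resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      pool = var.public_network                    # external network providing FIPs
    }

    resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
      port_id     = openstack_networking_port_v2.manager_port_management.id
    }

    resource "openstack_compute_instance_v2" "node_server" {
      count       = var.node_count
      name        = "testbed-node-${count.index}"
      flavor_name = var.node_flavor
      image_id    = data.openstack_images_image_v2.image_node.id
      key_pair    = openstack_compute_keypair_v2.key.name

      network {
        port = openstack_networking_port_v2.node_port_management[count.index].id
      }
    }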
2025-03-23 12:37:26.597817 | orchestrator | 12:37:26.597 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-03-23 12:37:26.599528 | orchestrator | 12:37:26.599 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-03-23 12:37:26.602594 | orchestrator | 12:37:26.602 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-03-23 12:37:26.608276 | orchestrator | 12:37:26.608 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-03-23 12:37:33.272280 | orchestrator | 12:37:33.271 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=1593d3a9-f67f-4cf1-bb81-8930ccc11cec] 2025-03-23 12:37:33.294177 | orchestrator | 12:37:33.293 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-03-23 12:37:33.294463 | orchestrator | 12:37:33.294 STDOUT terraform: local_file.inventory: Creating... 2025-03-23 12:37:33.295648 | orchestrator | 12:37:33.295 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-03-23 12:37:33.298002 | orchestrator | 12:37:33.297 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=1a0c60b7a9db9af8ef97a69693884b1f23d421eb] 2025-03-23 12:37:33.301354 | orchestrator | 12:37:33.301 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=5a7b0360a3580fa802bafc14f5b6b1738e91b890] 2025-03-23 12:37:33.884663 | orchestrator | 12:37:33.884 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=1593d3a9-f67f-4cf1-bb81-8930ccc11cec] 2025-03-23 12:37:36.584811 | orchestrator | 12:37:36.584 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-03-23 12:37:36.593972 | orchestrator | 12:37:36.593 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-03-23 12:37:36.602222 | orchestrator | 12:37:36.602 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-03-23 12:37:36.603535 | orchestrator | 12:37:36.603 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-03-23 12:37:36.612671 | orchestrator | 12:37:36.603 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-03-23 12:37:36.612717 | orchestrator | 12:37:36.612 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-03-23 12:37:46.585026 | orchestrator | 12:37:46.584 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-03-23 12:37:46.595115 | orchestrator | 12:37:46.594 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-03-23 12:37:46.603324 | orchestrator | 12:37:46.603 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-03-23 12:37:46.604409 | orchestrator | 12:37:46.604 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-03-23 12:37:46.604602 | orchestrator | 12:37:46.604 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-03-23 12:37:46.613806 | orchestrator | 12:37:46.613 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... 
[20s elapsed] 2025-03-23 12:37:47.084830 | orchestrator | 12:37:47.084 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=a1700838-6954-4f90-a1f2-5af85710e8fe] 2025-03-23 12:37:47.146671 | orchestrator | 12:37:47.146 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 20s [id=f245092f-58d9-451e-9491-f619b9ef6ed7] 2025-03-23 12:37:47.212593 | orchestrator | 12:37:47.212 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=94a6dd73-a0b3-45f5-91a7-f5c08a4e3b40] 2025-03-23 12:37:56.586768 | orchestrator | 12:37:56.586 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-03-23 12:37:56.605074 | orchestrator | 12:37:56.604 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-03-23 12:37:56.614260 | orchestrator | 12:37:56.614 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-03-23 12:37:57.168575 | orchestrator | 12:37:57.168 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=a24ecfdf-f7b6-4c3c-8dcc-31a459b5f107] 2025-03-23 12:37:57.848864 | orchestrator | 12:37:57.848 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=ed86205f-8809-48a5-95a6-ad280552c233] 2025-03-23 12:38:06.587827 | orchestrator | 12:38:06.587 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed] 2025-03-23 12:38:07.421268 | orchestrator | 12:38:07.420 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 40s [id=f3bc52a4-3313-4586-a1d0-6f9f867beb5c] 2025-03-23 12:38:07.439446 | orchestrator | 12:38:07.439 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-03-23 12:38:07.450304 | orchestrator | 12:38:07.450 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=2245925340512943896] 2025-03-23 12:38:07.451848 | orchestrator | 12:38:07.451 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating... 2025-03-23 12:38:07.452565 | orchestrator | 12:38:07.452 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating... 2025-03-23 12:38:07.455297 | orchestrator | 12:38:07.455 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-03-23 12:38:07.460413 | orchestrator | 12:38:07.460 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-03-23 12:38:07.460447 | orchestrator | 12:38:07.460 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating... 2025-03-23 12:38:07.464669 | orchestrator | 12:38:07.464 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating... 2025-03-23 12:38:07.470435 | orchestrator | 12:38:07.470 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-03-23 12:38:07.474241 | orchestrator | 12:38:07.474 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating... 2025-03-23 12:38:07.483607 | orchestrator | 12:38:07.483 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-03-23 12:38:07.487889 | orchestrator | 12:38:07.487 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating... 
2025-03-23 12:38:12.765548 | orchestrator | 12:38:12.765 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 6s [id=a1700838-6954-4f90-a1f2-5af85710e8fe/9e74186d-4472-4929-8a20-9843938772e5] 2025-03-23 12:38:12.778391 | orchestrator | 12:38:12.778 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-03-23 12:38:12.786282 | orchestrator | 12:38:12.785 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=94a6dd73-a0b3-45f5-91a7-f5c08a4e3b40/ea8f35dc-a35e-4089-a77c-db984e90bcf5] 2025-03-23 12:38:12.802291 | orchestrator | 12:38:12.801 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=f3bc52a4-3313-4586-a1d0-6f9f867beb5c/be2f0237-b6d9-43f0-8b5b-dde81dc603fc] 2025-03-23 12:38:12.803954 | orchestrator | 12:38:12.803 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating... 2025-03-23 12:38:12.807401 | orchestrator | 12:38:12.807 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 6s [id=f245092f-58d9-451e-9491-f619b9ef6ed7/6d22c86c-8b28-4de1-9381-02b0bcd9097d] 2025-03-23 12:38:12.819156 | orchestrator | 12:38:12.818 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 6s [id=a1700838-6954-4f90-a1f2-5af85710e8fe/cb506f9f-b661-4909-8304-0e1e52bc58d9] 2025-03-23 12:38:12.820789 | orchestrator | 12:38:12.820 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-03-23 12:38:12.820885 | orchestrator | 12:38:12.820 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-03-23 12:38:12.829485 | orchestrator | 12:38:12.829 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-03-23 12:38:12.834267 | orchestrator | 12:38:12.834 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=a24ecfdf-f7b6-4c3c-8dcc-31a459b5f107/94c1a16d-83bf-4bb0-903c-e73c9d50c029] 2025-03-23 12:38:12.838102 | orchestrator | 12:38:12.837 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 6s [id=94a6dd73-a0b3-45f5-91a7-f5c08a4e3b40/8fbee761-a5d2-4623-bf19-8989346ac6dd] 2025-03-23 12:38:12.842544 | orchestrator | 12:38:12.842 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating... 2025-03-23 12:38:12.843928 | orchestrator | 12:38:12.843 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=f245092f-58d9-451e-9491-f619b9ef6ed7/98af1d1a-144c-4faa-87cf-25faeb3fb806] 2025-03-23 12:38:12.847308 | orchestrator | 12:38:12.847 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-03-23 12:38:12.855541 | orchestrator | 12:38:12.855 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating... 2025-03-23 12:38:12.873404 | orchestrator | 12:38:12.873 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 6s [id=94a6dd73-a0b3-45f5-91a7-f5c08a4e3b40/87cd8e5e-a11a-492b-892b-8d669e416dd6] 2025-03-23 12:38:12.886037 | orchestrator | 12:38:12.885 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
2025-03-23 12:38:12.893924 | orchestrator | 12:38:12.893 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 6s [id=a24ecfdf-f7b6-4c3c-8dcc-31a459b5f107/de771d17-f0b5-4049-b48c-6cbd0f44ea02] 2025-03-23 12:38:18.100390 | orchestrator | 12:38:18.099 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=a1700838-6954-4f90-a1f2-5af85710e8fe/fc6f372a-e295-454e-89b3-dab7283bde6d] 2025-03-23 12:38:18.124690 | orchestrator | 12:38:18.124 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=f245092f-58d9-451e-9491-f619b9ef6ed7/2a7b2e0f-187f-479d-baca-3c89b3a54e1f] 2025-03-23 12:38:18.150050 | orchestrator | 12:38:18.149 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=a24ecfdf-f7b6-4c3c-8dcc-31a459b5f107/a3062c13-e6b5-492c-acee-00491b2788e1] 2025-03-23 12:38:18.181580 | orchestrator | 12:38:18.181 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=f3bc52a4-3313-4586-a1d0-6f9f867beb5c/1784947b-28d0-43a3-b38e-e79d14638f2f] 2025-03-23 12:38:18.208408 | orchestrator | 12:38:18.208 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 5s [id=ed86205f-8809-48a5-95a6-ad280552c233/4783091f-b49d-4bb1-a12d-8bcd2b1f8992] 2025-03-23 12:38:18.222568 | orchestrator | 12:38:18.222 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 5s [id=f3bc52a4-3313-4586-a1d0-6f9f867beb5c/93f5d818-3956-4b7b-8e36-fec820f5f0d8] 2025-03-23 12:38:18.251373 | orchestrator | 12:38:18.251 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=ed86205f-8809-48a5-95a6-ad280552c233/3a31d3ab-ce8e-4019-949d-50084d47806b] 2025-03-23 12:38:18.290221 | orchestrator | 12:38:18.289 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 5s [id=ed86205f-8809-48a5-95a6-ad280552c233/006f5652-5a72-4e84-ab95-2470543ee754] 2025-03-23 12:38:22.887436 | orchestrator | 12:38:22.886 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-03-23 12:38:32.890298 | orchestrator | 12:38:32.890 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-03-23 12:38:33.476556 | orchestrator | 12:38:33.476 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=71344cea-7b39-4864-a660-a8f52915eec8] 2025-03-23 12:38:33.490376 | orchestrator | 12:38:33.490 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed. 
2025-03-23 12:38:33.497400 | orchestrator | 12:38:33.490 STDOUT terraform: Outputs: 2025-03-23 12:38:33.497602 | orchestrator | 12:38:33.490 STDOUT terraform: manager_address = 2025-03-23 12:38:33.497621 | orchestrator | 12:38:33.490 STDOUT terraform: private_key = 2025-03-23 12:38:43.601559 | orchestrator | changed 2025-03-23 12:38:43.634861 | 2025-03-23 12:38:43.634968 | TASK [Fetch manager address] 2025-03-23 12:38:43.988978 | orchestrator | ok 2025-03-23 12:38:43.999737 | 2025-03-23 12:38:44.000441 | TASK [Set manager_host address] 2025-03-23 12:38:44.145026 | orchestrator | ok 2025-03-23 12:38:44.156625 | 2025-03-23 12:38:44.156722 | LOOP [Update ansible collections] 2025-03-23 12:38:44.956202 | orchestrator | changed 2025-03-23 12:38:45.600738 | orchestrator | changed 2025-03-23 12:38:45.618989 | 2025-03-23 12:38:45.619096 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-03-23 12:38:56.111363 | orchestrator | ok 2025-03-23 12:38:56.123012 | 2025-03-23 12:38:56.123103 | TASK [Wait a little longer for the manager so that everything is ready] 2025-03-23 12:39:56.149540 | orchestrator | ok 2025-03-23 12:39:56.158086 | 2025-03-23 12:39:56.158174 | TASK [Fetch manager ssh hostkey] 2025-03-23 12:39:57.199048 | orchestrator | Output suppressed because no_log was given 2025-03-23 12:39:57.216956 | 2025-03-23 12:39:57.217087 | TASK [Get ssh keypair from terraform environment] 2025-03-23 12:39:57.781284 | orchestrator | changed 2025-03-23 12:39:57.799404 | 2025-03-23 12:39:57.799556 | TASK [Point out that the following task takes some time and does not give any output] 2025-03-23 12:39:57.850628 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
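The two wait tasks recorded above (poll port 22 until an "OpenSSH" banner appears, then pause so cloud-init and first-boot units can settle) follow the standard Ansible wait_for/pause pattern. A minimal sketch of that pattern, assuming a hypothetical manager_host variable holding the manager's floating IP; the actual tasks in the testbed playbooks may differ:

- name: Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"
  ansible.builtin.wait_for:
    host: "{{ manager_host }}"
    port: 22
    search_regex: OpenSSH
    timeout: 300
  delegate_to: localhost

- name: Wait a little longer for the manager so that everything is ready
  ansible.builtin.pause:
    seconds: 60   # 60s matches the gap seen in the log; the real value is an assumption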
2025-03-23 12:39:57.861975 | 2025-03-23 12:39:57.862091 | TASK [Run manager part 0] 2025-03-23 12:39:58.619882 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-03-23 12:39:58.655786 | orchestrator | 2025-03-23 12:40:00.811113 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-03-23 12:40:00.811149 | orchestrator | 2025-03-23 12:40:00.811164 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-03-23 12:40:00.811177 | orchestrator | ok: [testbed-manager] 2025-03-23 12:40:02.621566 | orchestrator | 2025-03-23 12:40:02.621620 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-03-23 12:40:02.621631 | orchestrator | 2025-03-23 12:40:02.621638 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-23 12:40:02.621649 | orchestrator | ok: [testbed-manager] 2025-03-23 12:40:03.769509 | orchestrator | 2025-03-23 12:40:03.769546 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-03-23 12:40:03.769558 | orchestrator | ok: [testbed-manager] 2025-03-23 12:40:03.818166 | orchestrator | 2025-03-23 12:40:03.818183 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-03-23 12:40:03.818191 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:40:03.854436 | orchestrator | 2025-03-23 12:40:03.854452 | orchestrator | TASK [Update package cache] **************************************************** 2025-03-23 12:40:03.854459 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:40:03.889851 | orchestrator | 2025-03-23 12:40:03.889865 | orchestrator | TASK [Install required packages] *********************************************** 2025-03-23 12:40:03.889872 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:40:03.924197 | orchestrator | 2025-03-23 12:40:03.924210 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-03-23 12:40:03.924217 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:40:03.956424 | orchestrator | 2025-03-23 12:40:03.956435 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-03-23 12:40:03.956442 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:40:03.983828 | orchestrator | 2025-03-23 12:40:03.983845 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-03-23 12:40:03.983852 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:40:04.008155 | orchestrator | 2025-03-23 12:40:04.008180 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-03-23 12:40:04.008188 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:40:04.825251 | orchestrator | 2025-03-23 12:40:04.825317 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-03-23 12:40:04.825346 | orchestrator | changed: [testbed-manager] 2025-03-23 12:43:03.773162 | orchestrator | 2025-03-23 12:43:03.773396 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-03-23 12:43:03.773448 | orchestrator | changed: [testbed-manager] 2025-03-23 12:44:24.790668 | orchestrator | 2025-03-23 12:44:24.790779 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-03-23 12:44:24.790815 | orchestrator | changed: [testbed-manager] 2025-03-23 12:44:48.116962 | orchestrator | 2025-03-23 12:44:48.117057 | orchestrator | TASK [Install required packages] *********************************************** 2025-03-23 12:44:48.117090 | orchestrator | changed: [testbed-manager] 2025-03-23 12:44:58.559054 | orchestrator | 2025-03-23 12:44:58.559120 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-03-23 12:44:58.559150 | orchestrator | changed: [testbed-manager] 2025-03-23 12:44:58.606590 | orchestrator | 2025-03-23 12:44:58.606639 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-03-23 12:44:58.606663 | orchestrator | ok: [testbed-manager] 2025-03-23 12:44:59.432833 | orchestrator | 2025-03-23 12:44:59.432884 | orchestrator | TASK [Get current user] ******************************************************** 2025-03-23 12:44:59.432904 | orchestrator | ok: [testbed-manager] 2025-03-23 12:45:00.139570 | orchestrator | 2025-03-23 12:45:00.139611 | orchestrator | TASK [Create venv directory] *************************************************** 2025-03-23 12:45:00.139627 | orchestrator | changed: [testbed-manager] 2025-03-23 12:45:07.434124 | orchestrator | 2025-03-23 12:45:07.434242 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-03-23 12:45:07.434280 | orchestrator | changed: [testbed-manager] 2025-03-23 12:45:14.085687 | orchestrator | 2025-03-23 12:45:14.085819 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-03-23 12:45:14.085886 | orchestrator | changed: [testbed-manager] 2025-03-23 12:45:16.960365 | orchestrator | 2025-03-23 12:45:16.960463 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-03-23 12:45:16.960500 | orchestrator | changed: [testbed-manager] 2025-03-23 12:45:18.875444 | orchestrator | 2025-03-23 12:45:18.875540 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-03-23 12:45:18.875572 | orchestrator | changed: [testbed-manager] 2025-03-23 12:45:20.192266 | orchestrator | 2025-03-23 12:45:20.192347 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-03-23 12:45:20.192372 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-03-23 12:45:20.232082 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-03-23 12:45:20.232202 | orchestrator | 2025-03-23 12:45:20.232228 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-03-23 12:45:20.232260 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-03-23 12:45:23.433971 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-03-23 12:45:23.434114 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-03-23 12:45:23.434135 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-03-23 12:45:23.434190 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-03-23 12:45:24.021195 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-03-23 12:45:24.021283 | orchestrator | 2025-03-23 12:45:24.021298 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-03-23 12:45:24.021323 | orchestrator | changed: [testbed-manager] 2025-03-23 12:45:46.066127 | orchestrator | 2025-03-23 12:45:46.066228 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-03-23 12:45:46.066246 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-03-23 12:45:48.224463 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-03-23 12:45:48.224548 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-03-23 12:45:48.224566 | orchestrator | 2025-03-23 12:45:48.224585 | orchestrator | TASK [Install local collections] *********************************************** 2025-03-23 12:45:48.224614 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-03-23 12:45:49.600468 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-03-23 12:45:49.600545 | orchestrator | 2025-03-23 12:45:49.600560 | orchestrator | PLAY [Create operator user] **************************************************** 2025-03-23 12:45:49.600573 | orchestrator | 2025-03-23 12:45:49.600586 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-23 12:45:49.600611 | orchestrator | ok: [testbed-manager] 2025-03-23 12:45:49.645504 | orchestrator | 2025-03-23 12:45:49.645559 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-03-23 12:45:49.645576 | orchestrator | ok: [testbed-manager] 2025-03-23 12:45:49.705791 | orchestrator | 2025-03-23 12:45:49.705846 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-03-23 12:45:49.705861 | orchestrator | ok: [testbed-manager] 2025-03-23 12:45:50.441186 | orchestrator | 2025-03-23 12:45:50.441279 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-03-23 12:45:50.441311 | orchestrator | changed: [testbed-manager] 2025-03-23 12:45:51.247819 | orchestrator | 2025-03-23 12:45:51.247874 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-03-23 12:45:51.247889 | orchestrator | changed: [testbed-manager] 2025-03-23 12:45:52.659742 | orchestrator | 2025-03-23 12:45:52.660130 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-03-23 12:45:52.660273 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-03-23 12:45:54.135675 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-03-23 12:45:54.135734 | orchestrator | 2025-03-23 12:45:54.135752 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-03-23 12:45:54.135775 | orchestrator | changed: [testbed-manager] 2025-03-23 12:45:56.004600 | orchestrator | 2025-03-23 12:45:56.004677 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-03-23 12:45:56.004707 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-03-23 
12:45:56.603656 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-03-23 12:45:56.603757 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-03-23 12:45:56.603777 | orchestrator | 2025-03-23 12:45:56.603793 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-03-23 12:45:56.603820 | orchestrator | changed: [testbed-manager] 2025-03-23 12:45:56.664678 | orchestrator | 2025-03-23 12:45:56.664762 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-03-23 12:45:56.664795 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:45:57.533407 | orchestrator | 2025-03-23 12:45:57.533495 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-03-23 12:45:57.533524 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-23 12:45:57.569298 | orchestrator | changed: [testbed-manager] 2025-03-23 12:45:57.569374 | orchestrator | 2025-03-23 12:45:57.569391 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-03-23 12:45:57.569418 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:45:57.598988 | orchestrator | 2025-03-23 12:45:57.599067 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-03-23 12:45:57.599095 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:45:57.633004 | orchestrator | 2025-03-23 12:45:57.633066 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-03-23 12:45:57.633085 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:45:57.677431 | orchestrator | 2025-03-23 12:45:57.678248 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-03-23 12:45:57.678311 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:45:58.537022 | orchestrator | 2025-03-23 12:45:58.537107 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-03-23 12:45:58.537164 | orchestrator | ok: [testbed-manager] 2025-03-23 12:46:00.075828 | orchestrator | 2025-03-23 12:46:00.075896 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-03-23 12:46:00.075914 | orchestrator | 2025-03-23 12:46:00.075929 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-23 12:46:00.075955 | orchestrator | ok: [testbed-manager] 2025-03-23 12:46:01.118287 | orchestrator | 2025-03-23 12:46:01.118327 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-03-23 12:46:01.118341 | orchestrator | changed: [testbed-manager] 2025-03-23 12:46:01.200708 | orchestrator | 2025-03-23 12:46:01.200844 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 12:46:01.200854 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-03-23 12:46:01.200860 | orchestrator | 2025-03-23 12:46:01.594220 | orchestrator | changed 2025-03-23 12:46:01.613925 | 2025-03-23 12:46:01.614059 | TASK [Point out that the log in on the manager is now possible] 2025-03-23 12:46:01.652321 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
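The osism.commons.operator tasks above reduce to the usual user-provisioning pattern: create a group and user, add supplementary groups, install a sudoers file, and set SSH authorized keys. A generic sketch of that pattern, not the role's actual implementation (the user name dragon is inferred from the /home/dragon paths that appear later in this log):

- name: Create operator group
  ansible.builtin.group:
    name: dragon
    state: present

- name: Create operator user
  ansible.builtin.user:
    name: dragon
    group: dragon
    groups: adm,sudo        # additional groups seen in the log output
    append: true
    shell: /bin/bash

- name: Set ssh authorized keys
  ansible.posix.authorized_key:
    user: dragon
    key: "{{ operator_public_key }}"   # hypothetical variable holding the public key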
2025-03-23 12:46:01.660989 | 2025-03-23 12:46:01.661094 | TASK [Point out that the following task takes some time and does not give any output] 2025-03-23 12:46:01.695142 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minuts for this task to complete. 2025-03-23 12:46:01.703869 | 2025-03-23 12:46:01.703980 | TASK [Run manager part 1 + 2] 2025-03-23 12:46:02.475912 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-03-23 12:46:02.519976 | orchestrator | 2025-03-23 12:46:05.029675 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-03-23 12:46:05.029786 | orchestrator | 2025-03-23 12:46:05.029844 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-23 12:46:05.029882 | orchestrator | ok: [testbed-manager] 2025-03-23 12:46:05.064553 | orchestrator | 2025-03-23 12:46:05.064631 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-03-23 12:46:05.064672 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:46:05.099956 | orchestrator | 2025-03-23 12:46:05.100013 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-03-23 12:46:05.100042 | orchestrator | ok: [testbed-manager] 2025-03-23 12:46:05.138863 | orchestrator | 2025-03-23 12:46:05.138921 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-03-23 12:46:05.138951 | orchestrator | ok: [testbed-manager] 2025-03-23 12:46:05.200685 | orchestrator | 2025-03-23 12:46:05.200733 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-03-23 12:46:05.200751 | orchestrator | ok: [testbed-manager] 2025-03-23 12:46:05.252162 | orchestrator | 2025-03-23 12:46:05.252220 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-03-23 12:46:05.252332 | orchestrator | ok: [testbed-manager] 2025-03-23 12:46:05.286394 | orchestrator | 2025-03-23 12:46:05.286445 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-03-23 12:46:05.286468 | orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-03-23 12:46:06.002191 | orchestrator | 2025-03-23 12:46:06.002259 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-03-23 12:46:06.002294 | orchestrator | ok: [testbed-manager] 2025-03-23 12:46:06.038374 | orchestrator | 2025-03-23 12:46:06.038439 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-03-23 12:46:06.038467 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:46:07.362049 | orchestrator | 2025-03-23 12:46:07.362148 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-03-23 12:46:07.362192 | orchestrator | changed: [testbed-manager] 2025-03-23 12:46:07.975309 | orchestrator | 2025-03-23 12:46:07.975396 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-03-23 12:46:07.975428 | orchestrator | ok: [testbed-manager] 2025-03-23 12:46:09.132544 | orchestrator | 2025-03-23 12:46:09.132694 | orchestrator | TASK 
[osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-03-23 12:46:09.132722 | orchestrator | changed: [testbed-manager] 2025-03-23 12:46:22.816959 | orchestrator | 2025-03-23 12:46:22.817003 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-03-23 12:46:22.817018 | orchestrator | changed: [testbed-manager] 2025-03-23 12:46:23.494681 | orchestrator | 2025-03-23 12:46:23.494728 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-03-23 12:46:23.494746 | orchestrator | ok: [testbed-manager] 2025-03-23 12:46:23.548385 | orchestrator | 2025-03-23 12:46:23.548427 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-03-23 12:46:23.548443 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:46:24.541728 | orchestrator | 2025-03-23 12:46:24.541820 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-03-23 12:46:24.541854 | orchestrator | changed: [testbed-manager] 2025-03-23 12:46:25.586587 | orchestrator | 2025-03-23 12:46:25.586639 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-03-23 12:46:25.586657 | orchestrator | changed: [testbed-manager] 2025-03-23 12:46:26.201819 | orchestrator | 2025-03-23 12:46:26.201878 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-03-23 12:46:26.201894 | orchestrator | changed: [testbed-manager] 2025-03-23 12:46:26.238403 | orchestrator | 2025-03-23 12:46:26.238444 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-03-23 12:46:26.238457 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-03-23 12:46:28.574085 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-03-23 12:46:28.574177 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-03-23 12:46:28.574190 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-03-23 12:46:28.574210 | orchestrator | changed: [testbed-manager] 2025-03-23 12:46:38.841454 | orchestrator | 2025-03-23 12:46:38.841602 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-03-23 12:46:38.841628 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-03-23 12:46:39.993127 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-03-23 12:46:39.993217 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-03-23 12:46:39.993234 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-03-23 12:46:39.993250 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-03-23 12:46:39.993263 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-03-23 12:46:39.993275 | orchestrator | 2025-03-23 12:46:39.993288 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-03-23 12:46:39.993327 | orchestrator | changed: [testbed-manager] 2025-03-23 12:46:40.033302 | orchestrator | 2025-03-23 12:46:40.033402 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-03-23 12:46:40.033437 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:46:43.397559 | orchestrator | 2025-03-23 12:46:43.397659 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-03-23 12:46:43.397693 | orchestrator | changed: [testbed-manager] 2025-03-23 12:46:43.440096 | orchestrator | 2025-03-23 12:46:43.440195 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-03-23 12:46:43.440214 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:48:27.341371 | orchestrator | 2025-03-23 12:48:27.341480 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-03-23 12:48:27.341516 | orchestrator | changed: [testbed-manager] 2025-03-23 12:48:28.570361 | orchestrator | 2025-03-23 12:48:28.570465 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-03-23 12:48:28.570499 | orchestrator | ok: [testbed-manager] 2025-03-23 12:48:28.650124 | orchestrator | 2025-03-23 12:48:28.650288 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 12:48:28.650312 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-03-23 12:48:28.650327 | orchestrator | 2025-03-23 12:48:28.838892 | orchestrator | changed 2025-03-23 12:48:28.858484 | 2025-03-23 12:48:28.858625 | TASK [Reboot manager] 2025-03-23 12:48:30.399138 | orchestrator | changed 2025-03-23 12:48:30.416661 | 2025-03-23 12:48:30.416795 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-03-23 12:48:46.845739 | orchestrator | ok 2025-03-23 12:48:46.857534 | 2025-03-23 12:48:46.857714 | TASK [Wait a little longer for the manager so that everything is ready] 2025-03-23 12:49:46.910760 | orchestrator | ok 2025-03-23 12:49:46.922115 | 2025-03-23 12:49:46.922240 | TASK [Deploy manager + bootstrap nodes] 2025-03-23 12:49:49.631369 | orchestrator | 2025-03-23 12:49:49.635835 | orchestrator | # DEPLOY MANAGER 2025-03-23 12:49:49.635857 | orchestrator | 2025-03-23 12:49:49.635866 | orchestrator | + set -e 2025-03-23 12:49:49.635888 | orchestrator | + echo 2025-03-23 12:49:49.635897 | orchestrator | + echo '# DEPLOY MANAGER' 2025-03-23 12:49:49.635905 | 
orchestrator | + echo 2025-03-23 12:49:49.635916 | orchestrator | + cat /opt/manager-vars.sh 2025-03-23 12:49:49.635934 | orchestrator | export NUMBER_OF_NODES=6 2025-03-23 12:49:49.636013 | orchestrator | 2025-03-23 12:49:49.636022 | orchestrator | export CEPH_VERSION=quincy 2025-03-23 12:49:49.636028 | orchestrator | export CONFIGURATION_VERSION=main 2025-03-23 12:49:49.636035 | orchestrator | export MANAGER_VERSION=8.1.0 2025-03-23 12:49:49.636040 | orchestrator | export OPENSTACK_VERSION=2024.1 2025-03-23 12:49:49.636047 | orchestrator | 2025-03-23 12:49:49.636053 | orchestrator | export ARA=false 2025-03-23 12:49:49.636059 | orchestrator | export TEMPEST=false 2025-03-23 12:49:49.636065 | orchestrator | export IS_ZUUL=true 2025-03-23 12:49:49.636071 | orchestrator | 2025-03-23 12:49:49.636077 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.177 2025-03-23 12:49:49.636083 | orchestrator | export EXTERNAL_API=false 2025-03-23 12:49:49.636090 | orchestrator | 2025-03-23 12:49:49.636095 | orchestrator | export IMAGE_USER=ubuntu 2025-03-23 12:49:49.636101 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-03-23 12:49:49.636108 | orchestrator | 2025-03-23 12:49:49.636114 | orchestrator | export CEPH_STACK=ceph-ansible 2025-03-23 12:49:49.636122 | orchestrator | 2025-03-23 12:49:49.637012 | orchestrator | + echo 2025-03-23 12:49:49.637021 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-03-23 12:49:49.637029 | orchestrator | ++ export INTERACTIVE=false 2025-03-23 12:49:49.637112 | orchestrator | ++ INTERACTIVE=false 2025-03-23 12:49:49.637121 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-03-23 12:49:49.637378 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-03-23 12:49:49.637387 | orchestrator | + source /opt/manager-vars.sh 2025-03-23 12:49:49.637427 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-03-23 12:49:49.637435 | orchestrator | ++ NUMBER_OF_NODES=6 2025-03-23 12:49:49.637443 | orchestrator | ++ export CEPH_VERSION=quincy 2025-03-23 12:49:49.637528 | orchestrator | ++ CEPH_VERSION=quincy 2025-03-23 12:49:49.637535 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-03-23 12:49:49.637542 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-03-23 12:49:49.637552 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-03-23 12:49:49.637558 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-03-23 12:49:49.637566 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-03-23 12:49:49.637646 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-03-23 12:49:49.637653 | orchestrator | ++ export ARA=false 2025-03-23 12:49:49.637659 | orchestrator | ++ ARA=false 2025-03-23 12:49:49.637665 | orchestrator | ++ export TEMPEST=false 2025-03-23 12:49:49.637671 | orchestrator | ++ TEMPEST=false 2025-03-23 12:49:49.637677 | orchestrator | ++ export IS_ZUUL=true 2025-03-23 12:49:49.637683 | orchestrator | ++ IS_ZUUL=true 2025-03-23 12:49:49.637689 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.177 2025-03-23 12:49:49.637695 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.177 2025-03-23 12:49:49.637706 | orchestrator | ++ export EXTERNAL_API=false 2025-03-23 12:49:49.637713 | orchestrator | ++ EXTERNAL_API=false 2025-03-23 12:49:49.637719 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-03-23 12:49:49.637725 | orchestrator | ++ IMAGE_USER=ubuntu 2025-03-23 12:49:49.637732 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-03-23 12:49:49.637788 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-03-23 12:49:49.637796 | orchestrator | ++ export 
CEPH_STACK=ceph-ansible 2025-03-23 12:49:49.637802 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-03-23 12:49:49.637810 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-03-23 12:49:49.701837 | orchestrator | + docker version 2025-03-23 12:49:49.971415 | orchestrator | Client: Docker Engine - Community 2025-03-23 12:49:49.975509 | orchestrator | Version: 26.1.4 2025-03-23 12:49:49.975546 | orchestrator | API version: 1.45 2025-03-23 12:49:49.975562 | orchestrator | Go version: go1.21.11 2025-03-23 12:49:49.975576 | orchestrator | Git commit: 5650f9b 2025-03-23 12:49:49.975590 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-03-23 12:49:49.975605 | orchestrator | OS/Arch: linux/amd64 2025-03-23 12:49:49.975619 | orchestrator | Context: default 2025-03-23 12:49:49.975633 | orchestrator | 2025-03-23 12:49:49.975648 | orchestrator | Server: Docker Engine - Community 2025-03-23 12:49:49.975663 | orchestrator | Engine: 2025-03-23 12:49:49.975676 | orchestrator | Version: 26.1.4 2025-03-23 12:49:49.975690 | orchestrator | API version: 1.45 (minimum version 1.24) 2025-03-23 12:49:49.975704 | orchestrator | Go version: go1.21.11 2025-03-23 12:49:49.975719 | orchestrator | Git commit: de5c9cf 2025-03-23 12:49:49.975764 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-03-23 12:49:49.975780 | orchestrator | OS/Arch: linux/amd64 2025-03-23 12:49:49.975794 | orchestrator | Experimental: false 2025-03-23 12:49:49.975808 | orchestrator | containerd: 2025-03-23 12:49:49.975822 | orchestrator | Version: 1.7.25 2025-03-23 12:49:49.975836 | orchestrator | GitCommit: bcc810d6b9066471b0b6fa75f557a15a1cbf31bb 2025-03-23 12:49:49.975850 | orchestrator | runc: 2025-03-23 12:49:49.975863 | orchestrator | Version: 1.2.4 2025-03-23 12:49:49.975877 | orchestrator | GitCommit: v1.2.4-0-g6c52b3f 2025-03-23 12:49:49.975892 | orchestrator | docker-init: 2025-03-23 12:49:49.975905 | orchestrator | Version: 0.19.0 2025-03-23 12:49:49.975919 | orchestrator | GitCommit: de40ad0 2025-03-23 12:49:49.975941 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-03-23 12:49:49.984818 | orchestrator | + set -e 2025-03-23 12:49:49.984945 | orchestrator | + source /opt/manager-vars.sh 2025-03-23 12:49:49.984997 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-03-23 12:49:49.985015 | orchestrator | ++ NUMBER_OF_NODES=6 2025-03-23 12:49:49.985029 | orchestrator | ++ export CEPH_VERSION=quincy 2025-03-23 12:49:49.985043 | orchestrator | ++ CEPH_VERSION=quincy 2025-03-23 12:49:49.985057 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-03-23 12:49:49.985071 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-03-23 12:49:49.985085 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-03-23 12:49:49.985099 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-03-23 12:49:49.985113 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-03-23 12:49:49.985127 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-03-23 12:49:49.985140 | orchestrator | ++ export ARA=false 2025-03-23 12:49:49.985154 | orchestrator | ++ ARA=false 2025-03-23 12:49:49.985168 | orchestrator | ++ export TEMPEST=false 2025-03-23 12:49:49.985182 | orchestrator | ++ TEMPEST=false 2025-03-23 12:49:49.985196 | orchestrator | ++ export IS_ZUUL=true 2025-03-23 12:49:49.985210 | orchestrator | ++ IS_ZUUL=true 2025-03-23 12:49:49.985224 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.177 2025-03-23 12:49:49.985238 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.177 
2025-03-23 12:49:49.985252 | orchestrator | ++ export EXTERNAL_API=false 2025-03-23 12:49:49.985266 | orchestrator | ++ EXTERNAL_API=false 2025-03-23 12:49:49.985280 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-03-23 12:49:49.985294 | orchestrator | ++ IMAGE_USER=ubuntu 2025-03-23 12:49:49.985308 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-03-23 12:49:49.985323 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-03-23 12:49:49.985341 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-03-23 12:49:49.992647 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-03-23 12:49:49.992704 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-03-23 12:49:49.992719 | orchestrator | ++ export INTERACTIVE=false 2025-03-23 12:49:49.992733 | orchestrator | ++ INTERACTIVE=false 2025-03-23 12:49:49.992747 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-03-23 12:49:49.992761 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-03-23 12:49:49.992774 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-03-23 12:49:49.992791 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0 2025-03-23 12:49:49.992815 | orchestrator | + set -e 2025-03-23 12:49:49.993259 | orchestrator | + VERSION=8.1.0 2025-03-23 12:49:50.002333 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-03-23 12:49:50.002432 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-03-23 12:49:50.008308 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-03-23 12:49:50.008343 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-03-23 12:49:50.011034 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-03-23 12:49:50.018311 | orchestrator | /opt/configuration ~ 2025-03-23 12:49:50.021654 | orchestrator | + set -e 2025-03-23 12:49:50.021676 | orchestrator | + pushd /opt/configuration 2025-03-23 12:49:50.021692 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-03-23 12:49:50.021720 | orchestrator | + source /opt/venv/bin/activate 2025-03-23 12:49:50.023070 | orchestrator | ++ deactivate nondestructive 2025-03-23 12:49:50.023381 | orchestrator | ++ '[' -n '' ']' 2025-03-23 12:49:50.023400 | orchestrator | ++ '[' -n '' ']' 2025-03-23 12:49:50.023415 | orchestrator | ++ hash -r 2025-03-23 12:49:50.023429 | orchestrator | ++ '[' -n '' ']' 2025-03-23 12:49:50.023443 | orchestrator | ++ unset VIRTUAL_ENV 2025-03-23 12:49:50.023458 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-03-23 12:49:50.023472 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-03-23 12:49:50.023509 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-03-23 12:49:50.023673 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-03-23 12:49:50.023691 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-03-23 12:49:50.023706 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-03-23 12:49:50.023721 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-23 12:49:50.023739 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-23 12:49:51.404099 | orchestrator | ++ export PATH 2025-03-23 12:49:51.404221 | orchestrator | ++ '[' -n '' ']' 2025-03-23 12:49:51.404239 | orchestrator | ++ '[' -z '' ']' 2025-03-23 12:49:51.404254 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-03-23 12:49:51.404269 | orchestrator | ++ PS1='(venv) ' 2025-03-23 12:49:51.404283 | orchestrator | ++ export PS1 2025-03-23 12:49:51.404297 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-03-23 12:49:51.404312 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-03-23 12:49:51.404326 | orchestrator | ++ hash -r 2025-03-23 12:49:51.404341 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-03-23 12:49:51.404376 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-03-23 12:49:51.404954 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-03-23 12:49:51.406589 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-03-23 12:49:51.407926 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-03-23 12:49:51.409002 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (24.2) 2025-03-23 12:49:51.420775 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.1.8) 2025-03-23 12:49:51.422378 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-03-23 12:49:51.423423 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-03-23 12:49:51.424965 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-03-23 12:49:51.463404 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.1) 2025-03-23 12:49:51.464869 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-03-23 12:49:51.466643 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.3.0) 2025-03-23 12:49:51.467958 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.1.31) 2025-03-23 12:49:51.472206 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-03-23 12:49:51.690946 | orchestrator | ++ which gilt 2025-03-23 12:49:51.693053 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-03-23 12:49:51.940416 | orchestrator | + /opt/venv/bin/gilt overlay 2025-03-23 12:49:51.941316 | orchestrator | osism.cfg-generics: 2025-03-23 12:49:53.494106 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics 2025-03-23 12:49:53.494244 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-03-23 12:49:54.691073 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-03-23 12:49:54.691200 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-03-23 12:49:54.691221 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-03-23 12:49:54.691258 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-03-23 12:49:54.701098 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-03-23 12:49:55.238341 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-03-23 12:49:55.304281 | orchestrator | ~ 2025-03-23 12:49:55.305817 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-03-23 12:49:55.305845 | orchestrator | + deactivate 2025-03-23 12:49:55.305881 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-03-23 12:49:55.305897 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-23 12:49:55.305912 | orchestrator | + export PATH 2025-03-23 12:49:55.305926 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-03-23 12:49:55.305940 | orchestrator | + '[' -n '' ']' 2025-03-23 12:49:55.305954 | orchestrator | + hash -r 2025-03-23 12:49:55.305968 | orchestrator | + '[' -n '' ']' 2025-03-23 12:49:55.306010 | orchestrator | + unset VIRTUAL_ENV 2025-03-23 12:49:55.306072 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-03-23 12:49:55.306087 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-03-23 12:49:55.306104 | orchestrator | + unset -f deactivate 2025-03-23 12:49:55.306119 | orchestrator | + popd 2025-03-23 12:49:55.306141 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-03-23 12:49:55.306658 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-03-23 12:49:55.306685 | orchestrator | ++ semver 8.1.0 7.0.0 2025-03-23 12:49:55.360968 | orchestrator | + [[ 1 -ge 0 ]] 2025-03-23 12:49:55.402913 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-03-23 12:49:55.402996 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-03-23 12:49:55.403026 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-03-23 12:49:55.403308 | orchestrator | + source /opt/venv/bin/activate 2025-03-23 12:49:55.403403 | orchestrator | ++ deactivate nondestructive 2025-03-23 12:49:55.403419 | orchestrator | ++ '[' -n '' ']' 2025-03-23 12:49:55.403462 | orchestrator | ++ '[' -n '' ']' 2025-03-23 12:49:55.403494 | orchestrator | ++ hash -r 2025-03-23 12:49:55.403508 | orchestrator | ++ '[' -n '' ']' 2025-03-23 12:49:55.403523 | orchestrator | ++ unset VIRTUAL_ENV 2025-03-23 12:49:55.403537 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-03-23 12:49:55.403551 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-03-23 12:49:55.403570 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-03-23 12:49:55.403585 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-03-23 12:49:55.403599 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-03-23 12:49:55.403613 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-03-23 12:49:55.403628 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-23 12:49:55.403643 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-23 12:49:55.403660 | orchestrator | ++ export PATH 2025-03-23 12:49:55.403827 | orchestrator | ++ '[' -n '' ']' 2025-03-23 12:49:55.403850 | orchestrator | ++ '[' -z '' ']' 2025-03-23 12:49:55.404138 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-03-23 12:49:55.404156 | orchestrator | ++ PS1='(venv) ' 2025-03-23 12:49:55.404170 | orchestrator | ++ export PS1 2025-03-23 12:49:55.404184 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-03-23 12:49:55.404199 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-03-23 12:49:55.404215 | orchestrator | ++ hash -r 2025-03-23 12:49:55.404234 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-03-23 12:49:56.816708 | orchestrator | 2025-03-23 12:49:57.464393 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-03-23 12:49:57.464498 | orchestrator | 2025-03-23 12:49:57.464517 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-03-23 12:49:57.464550 | orchestrator | ok: [testbed-manager] 2025-03-23 12:49:58.531075 | orchestrator | 2025-03-23 12:49:58.531198 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-03-23 12:49:58.531235 | orchestrator | changed: [testbed-manager] 2025-03-23 12:50:01.109595 | orchestrator | 2025-03-23 12:50:01.109716 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-03-23 12:50:01.109737 | orchestrator | 2025-03-23 
12:50:01.109753 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-23 12:50:01.109784 | orchestrator | ok: [testbed-manager] 2025-03-23 12:50:07.489559 | orchestrator | 2025-03-23 12:50:07.489675 | orchestrator | TASK [Pull images] ************************************************************* 2025-03-23 12:50:07.489732 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-03-23 12:51:03.989085 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.6.2) 2025-03-23 12:51:03.989192 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0) 2025-03-23 12:51:03.989207 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0) 2025-03-23 12:51:03.989219 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0) 2025-03-23 12:51:03.989229 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.1-alpine) 2025-03-23 12:51:03.989240 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7) 2025-03-23 12:51:03.989249 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0) 2025-03-23 12:51:03.989259 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2) 2025-03-23 12:51:03.989275 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.6-alpine) 2025-03-23 12:51:03.989286 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/traefik:v3.2.1) 2025-03-23 12:51:03.989296 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.18.2) 2025-03-23 12:51:03.989305 | orchestrator | 2025-03-23 12:51:03.989316 | orchestrator | TASK [Check status] ************************************************************ 2025-03-23 12:51:03.989339 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-03-23 12:51:04.053748 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-03-23 12:51:04.053793 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j980051343550.1570', 'results_file': '/home/dragon/.ansible_async/j980051343550.1570', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-03-23 12:51:04.053814 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-03-23 12:51:04.053829 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j457365777347.1597', 'results_file': '/home/dragon/.ansible_async/j457365777347.1597', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.6.2', 'ansible_loop_var': 'item'}) 2025-03-23 12:51:04.053844 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
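The "Pull images" task above starts one asynchronous docker pull per image, and the "Check status" task then polls each background job until the pull has finished. A minimal bash sketch of the same pull-in-parallel-then-wait pattern, using a shortened image list copied from this log (the real playbook drives the list and the polling from its own variables):

    # Sketch only: mirrors the async pull / status-check pattern of the play above.
    set -euo pipefail

    images=(
        registry.osism.tech/osism/ara-server:1.7.2
        index.docker.io/library/mariadb:11.6.2
        registry.osism.tech/osism/kolla-ansible:8.1.0
        registry.osism.tech/osism/osism:0.20241219.2
    )

    pids=()
    for image in "${images[@]}"; do
        docker pull "$image" >/dev/null &   # one background pull per image
        pids+=("$!")
    done

    for pid in "${pids[@]}"; do
        wait "$pid"                         # with set -e, a failed pull aborts the script
    done
    echo "all images pulled"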
2025-03-23 12:51:04.053856 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j220034129948.1622', 'results_file': '/home/dragon/.ansible_async/j220034129948.1622', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-03-23 12:51:04.053874 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j535632305713.1655', 'results_file': '/home/dragon/.ansible_async/j535632305713.1655', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'}) 2025-03-23 12:51:04.053889 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j911543911303.1687', 'results_file': '/home/dragon/.ansible_async/j911543911303.1687', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-03-23 12:51:04.053901 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j519706627752.1722', 'results_file': '/home/dragon/.ansible_async/j519706627752.1722', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'}) 2025-03-23 12:51:04.053912 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-03-23 12:51:04.053924 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j80563100698.1753', 'results_file': '/home/dragon/.ansible_async/j80563100698.1753', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'}) 2025-03-23 12:51:04.053987 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j174277555580.1786', 'results_file': '/home/dragon/.ansible_async/j174277555580.1786', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-03-23 12:51:04.054000 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j373851798540.1825', 'results_file': '/home/dragon/.ansible_async/j373851798540.1825', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'}) 2025-03-23 12:51:04.054012 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j581924117969.1857', 'results_file': '/home/dragon/.ansible_async/j581924117969.1857', 'changed': True, 'item': 'index.docker.io/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'}) 2025-03-23 12:51:04.054064 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j167482860687.1882', 'results_file': '/home/dragon/.ansible_async/j167482860687.1882', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.2.1', 'ansible_loop_var': 'item'}) 2025-03-23 12:51:04.054075 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j374013533246.1916', 'results_file': '/home/dragon/.ansible_async/j374013533246.1916', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'}) 2025-03-23 12:51:04.054087 | orchestrator | 2025-03-23 12:51:04.054099 | orchestrator | TASK [Get /opt/manager-vars.sh] 
************************************************ 2025-03-23 12:51:04.054119 | orchestrator | ok: [testbed-manager] 2025-03-23 12:51:04.599288 | orchestrator | 2025-03-23 12:51:04.599387 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-03-23 12:51:04.599421 | orchestrator | changed: [testbed-manager] 2025-03-23 12:51:05.013685 | orchestrator | 2025-03-23 12:51:05.013785 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-03-23 12:51:05.013818 | orchestrator | changed: [testbed-manager] 2025-03-23 12:51:05.414693 | orchestrator | 2025-03-23 12:51:05.414803 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-03-23 12:51:05.414838 | orchestrator | changed: [testbed-manager] 2025-03-23 12:51:05.461864 | orchestrator | 2025-03-23 12:51:05.461892 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-03-23 12:51:05.461913 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:51:05.868389 | orchestrator | 2025-03-23 12:51:05.868495 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-03-23 12:51:05.868529 | orchestrator | ok: [testbed-manager] 2025-03-23 12:51:06.034193 | orchestrator | 2025-03-23 12:51:06.034272 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-03-23 12:51:06.034302 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:51:08.380343 | orchestrator | 2025-03-23 12:51:08.380450 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-03-23 12:51:08.380467 | orchestrator | 2025-03-23 12:51:08.380482 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-23 12:51:08.380511 | orchestrator | ok: [testbed-manager] 2025-03-23 12:51:08.737709 | orchestrator | 2025-03-23 12:51:08.737808 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-03-23 12:51:08.737846 | orchestrator | 2025-03-23 12:51:08.863107 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-03-23 12:51:08.863214 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-03-23 12:51:10.132567 | orchestrator | 2025-03-23 12:51:10.132661 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-03-23 12:51:10.132694 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-03-23 12:51:12.225111 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-03-23 12:51:12.225259 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-03-23 12:51:12.225280 | orchestrator | 2025-03-23 12:51:12.225296 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-03-23 12:51:12.225329 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-03-23 12:51:12.986659 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-03-23 12:51:12.986774 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-03-23 12:51:12.986793 | orchestrator | 2025-03-23 12:51:12.986809 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2025-03-23 12:51:12.986840 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-23 12:51:13.697538 | orchestrator | changed: [testbed-manager] 2025-03-23 12:51:13.697637 | orchestrator | 2025-03-23 12:51:13.697655 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-03-23 12:51:13.697686 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-23 12:51:13.792271 | orchestrator | changed: [testbed-manager] 2025-03-23 12:51:13.792301 | orchestrator | 2025-03-23 12:51:13.792316 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-03-23 12:51:13.792336 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:51:14.222509 | orchestrator | 2025-03-23 12:51:14.222581 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-03-23 12:51:14.222609 | orchestrator | ok: [testbed-manager] 2025-03-23 12:51:14.346751 | orchestrator | 2025-03-23 12:51:14.346785 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-03-23 12:51:14.346809 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-03-23 12:51:15.535376 | orchestrator | 2025-03-23 12:51:15.535471 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-03-23 12:51:15.535501 | orchestrator | changed: [testbed-manager] 2025-03-23 12:51:16.486549 | orchestrator | 2025-03-23 12:51:16.486654 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-03-23 12:51:16.486690 | orchestrator | changed: [testbed-manager] 2025-03-23 12:51:19.841404 | orchestrator | 2025-03-23 12:51:19.841513 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-03-23 12:51:19.841546 | orchestrator | changed: [testbed-manager] 2025-03-23 12:51:20.150404 | orchestrator | 2025-03-23 12:51:20.150506 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-03-23 12:51:20.150539 | orchestrator | 2025-03-23 12:51:20.265296 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-03-23 12:51:20.265337 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-03-23 12:51:23.139535 | orchestrator | 2025-03-23 12:51:23.139651 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-03-23 12:51:23.139685 | orchestrator | ok: [testbed-manager] 2025-03-23 12:51:23.292370 | orchestrator | 2025-03-23 12:51:23.292452 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-03-23 12:51:23.292484 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-03-23 12:51:24.558244 | orchestrator | 2025-03-23 12:51:24.558346 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-03-23 12:51:24.558380 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-03-23 12:51:24.674803 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-03-23 12:51:24.674868 | orchestrator | 
changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-03-23 12:51:24.674882 | orchestrator | 2025-03-23 12:51:24.674897 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-03-23 12:51:24.674922 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-03-23 12:51:25.399277 | orchestrator | 2025-03-23 12:51:25.399381 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-03-23 12:51:25.399415 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-03-23 12:51:26.115536 | orchestrator | 2025-03-23 12:51:26.115635 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-03-23 12:51:26.115672 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-23 12:51:26.576037 | orchestrator | changed: [testbed-manager] 2025-03-23 12:51:26.576141 | orchestrator | 2025-03-23 12:51:26.576158 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-03-23 12:51:26.576189 | orchestrator | changed: [testbed-manager] 2025-03-23 12:51:26.953992 | orchestrator | 2025-03-23 12:51:26.954171 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-03-23 12:51:26.954209 | orchestrator | ok: [testbed-manager] 2025-03-23 12:51:27.020605 | orchestrator | 2025-03-23 12:51:27.020685 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-03-23 12:51:27.020721 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:51:27.700721 | orchestrator | 2025-03-23 12:51:27.700839 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-03-23 12:51:27.700877 | orchestrator | changed: [testbed-manager] 2025-03-23 12:51:27.820600 | orchestrator | 2025-03-23 12:51:27.820633 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-03-23 12:51:27.820656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-03-23 12:51:28.638171 | orchestrator | 2025-03-23 12:51:28.638301 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-03-23 12:51:28.638963 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-03-23 12:51:29.357405 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-03-23 12:51:29.357509 | orchestrator | 2025-03-23 12:51:29.357526 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-03-23 12:51:29.357558 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-03-23 12:51:30.077813 | orchestrator | 2025-03-23 12:51:30.077969 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-03-23 12:51:30.078006 | orchestrator | changed: [testbed-manager] 2025-03-23 12:51:30.146793 | orchestrator | 2025-03-23 12:51:30.146837 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-03-23 12:51:30.146865 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:51:30.844055 | orchestrator | 2025-03-23 12:51:30.844185 | orchestrator | TASK 
[osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-03-23 12:51:30.844229 | orchestrator | changed: [testbed-manager] 2025-03-23 12:51:32.914331 | orchestrator | 2025-03-23 12:51:32.914439 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-03-23 12:51:32.914470 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-23 12:51:38.972544 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-23 12:51:38.972665 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-23 12:51:38.972683 | orchestrator | changed: [testbed-manager] 2025-03-23 12:51:38.972698 | orchestrator | 2025-03-23 12:51:38.972712 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-03-23 12:51:38.972743 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-03-23 12:51:39.703638 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-03-23 12:51:39.703708 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-03-23 12:51:39.703722 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-03-23 12:51:39.703735 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-03-23 12:51:39.703748 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-03-23 12:51:39.703761 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-03-23 12:51:39.703773 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-03-23 12:51:39.703786 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-03-23 12:51:39.703798 | orchestrator | changed: [testbed-manager] => (item=users) 2025-03-23 12:51:39.703810 | orchestrator | 2025-03-23 12:51:39.703823 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-03-23 12:51:39.703849 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-03-23 12:51:39.894119 | orchestrator | 2025-03-23 12:51:39.894169 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-03-23 12:51:39.894195 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-03-23 12:51:40.664411 | orchestrator | 2025-03-23 12:51:40.664514 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-03-23 12:51:40.664545 | orchestrator | changed: [testbed-manager] 2025-03-23 12:51:41.342575 | orchestrator | 2025-03-23 12:51:41.342690 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-03-23 12:51:41.342724 | orchestrator | ok: [testbed-manager] 2025-03-23 12:51:42.142993 | orchestrator | 2025-03-23 12:51:42.143117 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-03-23 12:51:42.143152 | orchestrator | changed: [testbed-manager] 2025-03-23 12:51:46.779792 | orchestrator | 2025-03-23 12:51:46.779913 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-03-23 12:51:46.780027 | orchestrator | changed: [testbed-manager] 2025-03-23 12:51:47.811972 | orchestrator | 2025-03-23 12:51:47.812045 | orchestrator | TASK [osism.services.netbox : Stop and disable old service 
docker-compose@netbox] *** 2025-03-23 12:51:47.812075 | orchestrator | ok: [testbed-manager] 2025-03-23 12:52:10.132426 | orchestrator | 2025-03-23 12:52:10.132572 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-03-23 12:52:10.132611 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-03-23 12:52:10.228079 | orchestrator | ok: [testbed-manager] 2025-03-23 12:52:10.228152 | orchestrator | 2025-03-23 12:52:10.228170 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-03-23 12:52:10.228202 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:52:10.288030 | orchestrator | 2025-03-23 12:52:10.288070 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-03-23 12:52:10.288086 | orchestrator | 2025-03-23 12:52:10.288100 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-03-23 12:52:10.288123 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:52:10.380989 | orchestrator | 2025-03-23 12:52:10.381040 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-03-23 12:52:10.381067 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-03-23 12:52:11.323425 | orchestrator | 2025-03-23 12:52:11.323551 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-03-23 12:52:11.323587 | orchestrator | ok: [testbed-manager] 2025-03-23 12:52:11.420965 | orchestrator | 2025-03-23 12:52:11.421065 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-03-23 12:52:11.421098 | orchestrator | ok: [testbed-manager] 2025-03-23 12:52:11.485199 | orchestrator | 2025-03-23 12:52:11.485262 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-03-23 12:52:11.485289 | orchestrator | ok: [testbed-manager] => { 2025-03-23 12:52:12.300379 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-03-23 12:52:12.300494 | orchestrator | } 2025-03-23 12:52:12.300512 | orchestrator | 2025-03-23 12:52:12.300527 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-03-23 12:52:12.300558 | orchestrator | ok: [testbed-manager] 2025-03-23 12:52:13.351171 | orchestrator | 2025-03-23 12:52:13.351286 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-03-23 12:52:13.351323 | orchestrator | ok: [testbed-manager] 2025-03-23 12:52:13.446154 | orchestrator | 2025-03-23 12:52:13.446255 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-03-23 12:52:13.446289 | orchestrator | ok: [testbed-manager] 2025-03-23 12:52:13.530386 | orchestrator | 2025-03-23 12:52:13.530446 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-03-23 12:52:13.530472 | orchestrator | ok: [testbed-manager] => { 2025-03-23 12:52:13.606339 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-03-23 12:52:13.606437 | orchestrator | } 2025-03-23 12:52:13.606454 | orchestrator | 2025-03-23 12:52:13.606469 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop 
netbox service] ****************** 2025-03-23 12:52:13.606506 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:52:13.676341 | orchestrator | 2025-03-23 12:52:13.676414 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-03-23 12:52:13.676442 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:52:13.752697 | orchestrator | 2025-03-23 12:52:13.752790 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-03-23 12:52:13.752826 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:52:13.840271 | orchestrator | 2025-03-23 12:52:13.840306 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-03-23 12:52:13.840328 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:52:13.909378 | orchestrator | 2025-03-23 12:52:13.909407 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-03-23 12:52:13.909427 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:52:13.998298 | orchestrator | 2025-03-23 12:52:13.998399 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-03-23 12:52:13.998434 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:52:15.397051 | orchestrator | 2025-03-23 12:52:15.397164 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-03-23 12:52:15.397198 | orchestrator | changed: [testbed-manager] 2025-03-23 12:52:15.513241 | orchestrator | 2025-03-23 12:52:15.513278 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-03-23 12:52:15.513305 | orchestrator | ok: [testbed-manager] 2025-03-23 12:53:15.592069 | orchestrator | 2025-03-23 12:53:15.592207 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-03-23 12:53:15.592247 | orchestrator | Pausing for 60 seconds 2025-03-23 12:53:15.711019 | orchestrator | changed: [testbed-manager] 2025-03-23 12:53:15.711080 | orchestrator | 2025-03-23 12:53:15.711096 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-03-23 12:53:15.711122 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-03-23 12:58:02.304415 | orchestrator | 2025-03-23 12:58:02.304526 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-03-23 12:58:02.304564 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-03-23 12:58:05.070218 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-03-23 12:58:05.070336 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 2025-03-23 12:58:05.070356 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-03-23 12:58:05.070374 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-03-23 12:58:05.070388 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 
2025-03-23 12:58:05.070402 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-03-23 12:58:05.070417 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-03-23 12:58:05.070431 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-03-23 12:58:05.070446 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-03-23 12:58:05.070460 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-03-23 12:58:05.070474 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-03-23 12:58:05.070488 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-03-23 12:58:05.070502 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-03-23 12:58:05.070543 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-03-23 12:58:05.070558 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-03-23 12:58:05.070572 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-03-23 12:58:05.070586 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-03-23 12:58:05.070600 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-03-23 12:58:05.070626 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-03-23 12:58:05.070641 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 2025-03-23 12:58:05.070656 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left). 2025-03-23 12:58:05.070670 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left). 2025-03-23 12:58:05.070684 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (37 retries left). 2025-03-23 12:58:05.070698 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (36 retries left). 2025-03-23 12:58:05.070712 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (35 retries left). 2025-03-23 12:58:05.070726 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (34 retries left). 
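Each FAILED - RETRYING line above is one polling attempt: the netbox role keeps re-checking container state until every container in the compose project reports healthy, and the attempt just below finally succeeds. The role's own implementation is not shown in this log; a rough shell equivalent built from the two commands that do appear elsewhere in it (docker compose ps and docker inspect), assuming every container in the project defines a healthcheck, could look like this:

    # Hypothetical stand-in for the "all containers in a good state" check.
    project_dir=/opt/netbox
    for attempt in $(seq 1 60); do
        unhealthy=0
        for cid in $(docker compose --project-directory "$project_dir" ps -q); do
            status=$(docker inspect -f '{{.State.Health.Status}}' "$cid")
            [ "$status" = "healthy" ] || unhealthy=1
        done
        if [ "$unhealthy" -eq 0 ]; then
            echo "all containers healthy"
            break
        fi
        sleep 5   # retry delay; the role's actual interval is not visible here
    done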
2025-03-23 12:58:05.070740 | orchestrator | changed: [testbed-manager] 2025-03-23 12:58:05.070755 | orchestrator | 2025-03-23 12:58:05.070801 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-03-23 12:58:05.070816 | orchestrator | 2025-03-23 12:58:05.070831 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-23 12:58:05.070862 | orchestrator | ok: [testbed-manager] 2025-03-23 12:58:05.200384 | orchestrator | 2025-03-23 12:58:05.200436 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-03-23 12:58:05.200461 | orchestrator | 2025-03-23 12:58:05.258137 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-03-23 12:58:05.258198 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-03-23 12:58:07.375654 | orchestrator | 2025-03-23 12:58:07.375815 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-03-23 12:58:07.375854 | orchestrator | ok: [testbed-manager] 2025-03-23 12:58:07.445071 | orchestrator | 2025-03-23 12:58:07.445113 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-03-23 12:58:07.445137 | orchestrator | ok: [testbed-manager] 2025-03-23 12:58:07.546991 | orchestrator | 2025-03-23 12:58:07.547065 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-03-23 12:58:07.547094 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-03-23 12:58:10.555100 | orchestrator | 2025-03-23 12:58:10.555202 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-03-23 12:58:10.555238 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-03-23 12:58:11.243127 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-03-23 12:58:11.243225 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-03-23 12:58:11.243242 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-03-23 12:58:11.243256 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-03-23 12:58:11.243271 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-03-23 12:58:11.243285 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-03-23 12:58:11.243300 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-03-23 12:58:11.243341 | orchestrator | 2025-03-23 12:58:11.243356 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-03-23 12:58:11.243387 | orchestrator | changed: [testbed-manager] 2025-03-23 12:58:11.351152 | orchestrator | 2025-03-23 12:58:11.351235 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-03-23 12:58:11.351264 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-03-23 12:58:12.674629 | orchestrator | 2025-03-23 12:58:12.674731 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-03-23 12:58:12.674794 | orchestrator | 
changed: [testbed-manager] => (item=ara) 2025-03-23 12:58:13.357005 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-03-23 12:58:13.357106 | orchestrator | 2025-03-23 12:58:13.357123 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-03-23 12:58:13.357153 | orchestrator | changed: [testbed-manager] 2025-03-23 12:58:13.425583 | orchestrator | 2025-03-23 12:58:13.425618 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-03-23 12:58:13.425640 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:58:13.509942 | orchestrator | 2025-03-23 12:58:13.509976 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-03-23 12:58:13.509998 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-03-23 12:58:14.947982 | orchestrator | 2025-03-23 12:58:14.948088 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-03-23 12:58:14.948122 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-23 12:58:15.622350 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-23 12:58:15.622450 | orchestrator | changed: [testbed-manager] 2025-03-23 12:58:15.622468 | orchestrator | 2025-03-23 12:58:15.622483 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-03-23 12:58:15.622513 | orchestrator | changed: [testbed-manager] 2025-03-23 12:58:15.733378 | orchestrator | 2025-03-23 12:58:15.733454 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-03-23 12:58:15.733483 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-03-23 12:58:16.410604 | orchestrator | 2025-03-23 12:58:16.410708 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-03-23 12:58:16.410741 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-23 12:58:17.084851 | orchestrator | changed: [testbed-manager] 2025-03-23 12:58:17.084956 | orchestrator | 2025-03-23 12:58:17.084975 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-03-23 12:58:17.085008 | orchestrator | changed: [testbed-manager] 2025-03-23 12:58:17.193456 | orchestrator | 2025-03-23 12:58:17.193537 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-03-23 12:58:17.193562 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-03-23 12:58:17.893380 | orchestrator | 2025-03-23 12:58:17.893550 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-03-23 12:58:17.894171 | orchestrator | changed: [testbed-manager] 2025-03-23 12:58:18.326793 | orchestrator | 2025-03-23 12:58:18.326896 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-03-23 12:58:18.326943 | orchestrator | changed: [testbed-manager] 2025-03-23 12:58:19.667867 | orchestrator | 2025-03-23 12:58:19.667973 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-03-23 12:58:19.668007 | 
orchestrator | changed: [testbed-manager] => (item=conductor) 2025-03-23 12:58:20.403556 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-03-23 12:58:20.403666 | orchestrator | 2025-03-23 12:58:20.403685 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-03-23 12:58:20.403716 | orchestrator | changed: [testbed-manager] 2025-03-23 12:58:20.812814 | orchestrator | 2025-03-23 12:58:20.812920 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-03-23 12:58:20.812979 | orchestrator | ok: [testbed-manager] 2025-03-23 12:58:20.864927 | orchestrator | 2025-03-23 12:58:20.864965 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-03-23 12:58:20.864986 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:58:21.569452 | orchestrator | 2025-03-23 12:58:21.569558 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-03-23 12:58:21.569591 | orchestrator | changed: [testbed-manager] 2025-03-23 12:58:21.659253 | orchestrator | 2025-03-23 12:58:21.659318 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-03-23 12:58:21.659346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-03-23 12:58:21.722857 | orchestrator | 2025-03-23 12:58:21.722893 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-03-23 12:58:21.722915 | orchestrator | ok: [testbed-manager] 2025-03-23 12:58:23.899926 | orchestrator | 2025-03-23 12:58:23.900040 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-03-23 12:58:23.900073 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-03-23 12:58:24.652912 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-03-23 12:58:24.653006 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-03-23 12:58:24.653021 | orchestrator | 2025-03-23 12:58:24.653034 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-03-23 12:58:24.653062 | orchestrator | changed: [testbed-manager] 2025-03-23 12:58:24.766453 | orchestrator | 2025-03-23 12:58:24.766498 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-03-23 12:58:24.766522 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-03-23 12:58:24.814600 | orchestrator | 2025-03-23 12:58:24.814630 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-03-23 12:58:24.814649 | orchestrator | ok: [testbed-manager] 2025-03-23 12:58:25.585713 | orchestrator | 2025-03-23 12:58:25.585878 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-03-23 12:58:25.585917 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-03-23 12:58:25.681835 | orchestrator | 2025-03-23 12:58:25.681887 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-03-23 12:58:25.681916 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-03-23 12:58:26.456630 | orchestrator | 2025-03-23 12:58:26.456696 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-03-23 12:58:26.456725 | orchestrator | changed: [testbed-manager] 2025-03-23 12:58:27.138833 | orchestrator | 2025-03-23 12:58:27.138940 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-03-23 12:58:27.138973 | orchestrator | ok: [testbed-manager] 2025-03-23 12:58:27.203510 | orchestrator | 2025-03-23 12:58:27.203548 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-03-23 12:58:27.203571 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:58:27.273609 | orchestrator | 2025-03-23 12:58:27.273654 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-03-23 12:58:27.273681 | orchestrator | ok: [testbed-manager] 2025-03-23 12:58:28.181816 | orchestrator | 2025-03-23 12:58:28.181895 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-03-23 12:58:28.181924 | orchestrator | changed: [testbed-manager] 2025-03-23 12:58:53.018808 | orchestrator | 2025-03-23 12:58:53.018935 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-03-23 12:58:53.018970 | orchestrator | changed: [testbed-manager] 2025-03-23 12:58:53.750982 | orchestrator | 2025-03-23 12:58:53.751092 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-03-23 12:58:53.751125 | orchestrator | ok: [testbed-manager] 2025-03-23 12:58:56.460623 | orchestrator | 2025-03-23 12:58:56.460726 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-03-23 12:58:56.460810 | orchestrator | changed: [testbed-manager] 2025-03-23 12:58:56.529446 | orchestrator | 2025-03-23 12:58:56.529475 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-03-23 12:58:56.529492 | orchestrator | ok: [testbed-manager] 2025-03-23 12:58:56.620124 | orchestrator | 2025-03-23 12:58:56.620167 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-03-23 12:58:56.620181 | orchestrator | 2025-03-23 12:58:56.620194 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-03-23 12:58:56.620216 | orchestrator | skipping: [testbed-manager] 2025-03-23 12:59:56.688932 | orchestrator | 2025-03-23 12:59:56.689055 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-03-23 12:59:56.689087 | orchestrator | Pausing for 60 seconds 2025-03-23 13:00:04.448879 | orchestrator | changed: [testbed-manager] 2025-03-23 13:00:04.449016 | orchestrator | 2025-03-23 13:00:04.449037 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-03-23 13:00:04.449070 | orchestrator | changed: [testbed-manager] 2025-03-23 13:00:46.641991 | orchestrator | 2025-03-23 13:00:46.642163 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-03-23 13:00:46.642199 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 
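The "Set fs.inotify.max_user_watches" and "Set fs.inotify.max_user_instances" tasks earlier in this play raise the kernel inotify limits on the manager host. The configured values are not visible in the log; a sketch with placeholder numbers, written here as a persistent sysctl drop-in rather than the role's own mechanism, would be:

    # Placeholder values and file name -- the limits actually set by the role are not in this log.
    cat <<'EOF' | sudo tee /etc/sysctl.d/99-manager-inotify.conf
    fs.inotify.max_user_watches = 1048576
    fs.inotify.max_user_instances = 1024
    EOF
    sudo sysctl --system   # reload all sysctl configuration files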
2025-03-23 13:00:53.173241 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-03-23 13:00:53.173339 | orchestrator | changed: [testbed-manager] 2025-03-23 13:00:53.173356 | orchestrator | 2025-03-23 13:00:53.173371 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-03-23 13:00:53.173398 | orchestrator | changed: [testbed-manager] 2025-03-23 13:00:53.268836 | orchestrator | 2025-03-23 13:00:53.268893 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-03-23 13:00:53.268921 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-03-23 13:00:53.332029 | orchestrator | 2025-03-23 13:00:53.332072 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-03-23 13:00:53.332088 | orchestrator | 2025-03-23 13:00:53.332116 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-03-23 13:00:53.332138 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:00:53.479878 | orchestrator | 2025-03-23 13:00:53.479917 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:00:53.479933 | orchestrator | testbed-manager : ok=103 changed=55 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-03-23 13:00:53.479947 | orchestrator | 2025-03-23 13:00:53.479969 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-03-23 13:00:53.485761 | orchestrator | + deactivate 2025-03-23 13:00:53.485792 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-03-23 13:00:53.485807 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-23 13:00:53.485823 | orchestrator | + export PATH 2025-03-23 13:00:53.485837 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-03-23 13:00:53.485852 | orchestrator | + '[' -n '' ']' 2025-03-23 13:00:53.485867 | orchestrator | + hash -r 2025-03-23 13:00:53.485881 | orchestrator | + '[' -n '' ']' 2025-03-23 13:00:53.485895 | orchestrator | + unset VIRTUAL_ENV 2025-03-23 13:00:53.485909 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-03-23 13:00:53.485924 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-03-23 13:00:53.485939 | orchestrator | + unset -f deactivate 2025-03-23 13:00:53.485954 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-03-23 13:00:53.485975 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-03-23 13:00:53.486975 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-03-23 13:00:53.487001 | orchestrator | + local max_attempts=60 2025-03-23 13:00:53.487015 | orchestrator | + local name=ceph-ansible 2025-03-23 13:00:53.487030 | orchestrator | + local attempt_num=1 2025-03-23 13:00:53.487056 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-03-23 13:00:53.526202 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-23 13:00:53.526892 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-03-23 13:00:53.526921 | orchestrator | + local max_attempts=60 2025-03-23 13:00:53.526966 | orchestrator | + local name=kolla-ansible 2025-03-23 13:00:53.526982 | orchestrator | + local attempt_num=1 2025-03-23 13:00:53.527004 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-03-23 13:00:53.556255 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-23 13:00:53.557504 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-03-23 13:00:53.557529 | orchestrator | + local max_attempts=60 2025-03-23 13:00:53.557544 | orchestrator | + local name=osism-ansible 2025-03-23 13:00:53.557558 | orchestrator | + local attempt_num=1 2025-03-23 13:00:53.557577 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-03-23 13:00:53.592878 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-23 13:00:55.023070 | orchestrator | + [[ true == \t\r\u\e ]] 2025-03-23 13:00:55.023157 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-03-23 13:00:55.023187 | orchestrator | ++ semver 8.1.0 9.0.0 2025-03-23 13:00:55.082470 | orchestrator | + [[ -1 -ge 0 ]] 2025-03-23 13:00:55.407085 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-03-23 13:00:55.407148 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-03-23 13:00:55.407166 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-03-23 13:00:55.413634 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-03-23 13:00:55.413646 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-03-23 13:00:55.413651 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-03-23 13:00:55.413670 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-03-23 13:00:55.413692 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy) 2025-03-23 13:00:55.413699 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy) 2025-03-23 13:00:55.413704 | orchestrator | manager-flower-1 
registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy) 2025-03-23 13:00:55.413710 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 50 seconds (healthy) 2025-03-23 13:00:55.413715 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener About a minute ago Up About a minute (healthy) 2025-03-23 13:00:55.413720 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-03-23 13:00:55.413725 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy) 2025-03-23 13:00:55.413730 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy) 2025-03-23 13:00:55.413735 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-03-23 13:00:55.413757 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy) 2025-03-23 13:00:55.413763 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-03-23 13:00:55.413767 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-03-23 13:00:55.413772 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy) 2025-03-23 13:00:55.413780 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-03-23 13:00:55.601986 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-03-23 13:00:55.611827 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 9 minutes ago Up 8 minutes (healthy) 2025-03-23 13:00:55.611839 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 9 minutes ago Up 3 minutes (healthy) 2025-03-23 13:00:55.611845 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 9 minutes ago Up 8 minutes (healthy) 5432/tcp 2025-03-23 13:00:55.611851 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 9 minutes ago Up 8 minutes (healthy) 6379/tcp 2025-03-23 13:00:55.611858 | orchestrator | ++ semver 8.1.0 7.0.0 2025-03-23 13:00:55.660845 | orchestrator | + [[ 1 -ge 0 ]] 2025-03-23 13:00:55.663450 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-03-23 13:00:55.663463 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-03-23 13:00:57.345186 | orchestrator | 2025-03-23 13:00:57 | INFO  | Task 18419a2e-a8fe-4825-ab26-ff3da9a82b68 (resolvconf) was prepared for execution. 
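The wait_for_container_healthy calls traced a little further up (for ceph-ansible, kolla-ansible and osism-ansible) poll docker inspect until the named container reports healthy. Only the successful first iteration of each call is visible in the trace; a reconstruction consistent with it, with the sleep interval and the failure handling being assumptions, looks roughly like this:

    wait_for_container_healthy() {
        # Arguments match the trace above: maximum attempts, container name.
        local max_attempts="$1"
        local name="$2"
        local attempt_num=1

        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            if (( attempt_num == max_attempts )); then
                echo "container $name did not become healthy" >&2
                return 1      # assumed failure behaviour, not shown in the log
            fi
            (( attempt_num++ ))
            sleep 5           # assumed poll interval, not shown in the log
        done
    }

    wait_for_container_healthy 60 ceph-ansible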
2025-03-23 13:01:00.557439 | orchestrator | 2025-03-23 13:00:57 | INFO  | It takes a moment until task 18419a2e-a8fe-4825-ab26-ff3da9a82b68 (resolvconf) has been started and output is visible here. 2025-03-23 13:01:00.557555 | orchestrator | 2025-03-23 13:01:00.557820 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-03-23 13:01:00.558244 | orchestrator | 2025-03-23 13:01:00.558855 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-23 13:01:00.559461 | orchestrator | Sunday 23 March 2025 13:01:00 +0000 (0:00:00.100) 0:00:00.100 ********** 2025-03-23 13:01:05.227906 | orchestrator | ok: [testbed-manager] 2025-03-23 13:01:05.228225 | orchestrator | 2025-03-23 13:01:05.229706 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-03-23 13:01:05.230859 | orchestrator | Sunday 23 March 2025 13:01:05 +0000 (0:00:04.671) 0:00:04.771 ********** 2025-03-23 13:01:05.285454 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:01:05.286108 | orchestrator | 2025-03-23 13:01:05.287393 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-03-23 13:01:05.288044 | orchestrator | Sunday 23 March 2025 13:01:05 +0000 (0:00:00.058) 0:00:04.829 ********** 2025-03-23 13:01:05.374252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-03-23 13:01:05.375147 | orchestrator | 2025-03-23 13:01:05.375736 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-03-23 13:01:05.376536 | orchestrator | Sunday 23 March 2025 13:01:05 +0000 (0:00:00.088) 0:00:04.918 ********** 2025-03-23 13:01:05.472283 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-03-23 13:01:05.472984 | orchestrator | 2025-03-23 13:01:05.473369 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-03-23 13:01:05.474824 | orchestrator | Sunday 23 March 2025 13:01:05 +0000 (0:00:00.097) 0:00:05.015 ********** 2025-03-23 13:01:06.735426 | orchestrator | ok: [testbed-manager] 2025-03-23 13:01:06.736479 | orchestrator | 2025-03-23 13:01:06.736522 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-03-23 13:01:06.736865 | orchestrator | Sunday 23 March 2025 13:01:06 +0000 (0:00:01.262) 0:00:06.278 ********** 2025-03-23 13:01:06.814454 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:01:06.815268 | orchestrator | 2025-03-23 13:01:06.816492 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-03-23 13:01:07.351948 | orchestrator | Sunday 23 March 2025 13:01:06 +0000 (0:00:00.080) 0:00:06.359 ********** 2025-03-23 13:01:07.352063 | orchestrator | ok: [testbed-manager] 2025-03-23 13:01:07.352946 | orchestrator | 2025-03-23 13:01:07.352972 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-03-23 13:01:07.352991 | orchestrator | Sunday 23 March 2025 13:01:07 +0000 (0:00:00.535) 0:00:06.894 ********** 2025-03-23 13:01:07.437770 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:01:07.442013 | orchestrator | 2025-03-23 13:01:07.442320 | orchestrator | TASK 
[osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-03-23 13:01:08.049291 | orchestrator | Sunday 23 March 2025 13:01:07 +0000 (0:00:00.086) 0:00:06.980 ********** 2025-03-23 13:01:08.049404 | orchestrator | changed: [testbed-manager] 2025-03-23 13:01:08.049775 | orchestrator | 2025-03-23 13:01:08.049803 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-03-23 13:01:08.049824 | orchestrator | Sunday 23 March 2025 13:01:08 +0000 (0:00:00.612) 0:00:07.592 ********** 2025-03-23 13:01:09.285987 | orchestrator | changed: [testbed-manager] 2025-03-23 13:01:09.286901 | orchestrator | 2025-03-23 13:01:09.286942 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-03-23 13:01:09.287864 | orchestrator | Sunday 23 March 2025 13:01:09 +0000 (0:00:01.235) 0:00:08.828 ********** 2025-03-23 13:01:10.308849 | orchestrator | ok: [testbed-manager] 2025-03-23 13:01:10.309050 | orchestrator | 2025-03-23 13:01:10.309077 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-03-23 13:01:10.310632 | orchestrator | Sunday 23 March 2025 13:01:10 +0000 (0:00:01.023) 0:00:09.851 ********** 2025-03-23 13:01:10.416039 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-03-23 13:01:10.416111 | orchestrator | 2025-03-23 13:01:10.417163 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-03-23 13:01:10.418064 | orchestrator | Sunday 23 March 2025 13:01:10 +0000 (0:00:00.107) 0:00:09.959 ********** 2025-03-23 13:01:11.610879 | orchestrator | changed: [testbed-manager] 2025-03-23 13:01:11.611742 | orchestrator | 2025-03-23 13:01:11.613418 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:01:11.613899 | orchestrator | 2025-03-23 13:01:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-23 13:01:11.614154 | orchestrator | 2025-03-23 13:01:11 | INFO  | Please wait and do not abort execution. 
2025-03-23 13:01:11.615273 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-23 13:01:11.616058 | orchestrator | 2025-03-23 13:01:11.616955 | orchestrator | Sunday 23 March 2025 13:01:11 +0000 (0:00:01.195) 0:00:11.154 ********** 2025-03-23 13:01:11.618062 | orchestrator | =============================================================================== 2025-03-23 13:01:11.618905 | orchestrator | Gathering Facts --------------------------------------------------------- 4.67s 2025-03-23 13:01:11.619377 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.26s 2025-03-23 13:01:11.620461 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.24s 2025-03-23 13:01:11.620875 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.20s 2025-03-23 13:01:11.621744 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.02s 2025-03-23 13:01:11.621997 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.61s 2025-03-23 13:01:11.622462 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.54s 2025-03-23 13:01:11.624542 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.11s 2025-03-23 13:01:11.624698 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.10s 2025-03-23 13:01:11.625492 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-03-23 13:01:11.625794 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2025-03-23 13:01:11.626491 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.08s 2025-03-23 13:01:11.626657 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-03-23 13:01:12.053644 | orchestrator | + osism apply sshconfig 2025-03-23 13:01:13.565740 | orchestrator | 2025-03-23 13:01:13 | INFO  | Task aaea21e8-8668-4986-829d-0116b61bf5ff (sshconfig) was prepared for execution. 2025-03-23 13:01:16.797041 | orchestrator | 2025-03-23 13:01:13 | INFO  | It takes a moment until task aaea21e8-8668-4986-829d-0116b61bf5ff (sshconfig) has been started and output is visible here. 
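The resolvconf play above finishes with /etc/resolv.conf linked to the systemd-resolved stub and the service restarted. A small verification sketch, assuming only the link target named in the "Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf" task:

# confirm the symlink created by the role and that systemd-resolved is serving
readlink -f /etc/resolv.conf            # expected: /run/systemd/resolve/stub-resolv.conf
systemctl is-active systemd-resolved    # expected: active
resolvectl status                       # shows the name servers now in use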
2025-03-23 13:01:16.797168 | orchestrator | 2025-03-23 13:01:16.797764 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-03-23 13:01:16.798715 | orchestrator | 2025-03-23 13:01:16.798920 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-03-23 13:01:16.799288 | orchestrator | Sunday 23 March 2025 13:01:16 +0000 (0:00:00.122) 0:00:00.122 ********** 2025-03-23 13:01:17.383052 | orchestrator | ok: [testbed-manager] 2025-03-23 13:01:17.384006 | orchestrator | 2025-03-23 13:01:17.384999 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-03-23 13:01:17.386119 | orchestrator | Sunday 23 March 2025 13:01:17 +0000 (0:00:00.594) 0:00:00.716 ********** 2025-03-23 13:01:17.926924 | orchestrator | changed: [testbed-manager] 2025-03-23 13:01:17.927498 | orchestrator | 2025-03-23 13:01:17.927626 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-03-23 13:01:17.928007 | orchestrator | Sunday 23 March 2025 13:01:17 +0000 (0:00:00.541) 0:00:01.258 ********** 2025-03-23 13:01:24.288958 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-03-23 13:01:24.289114 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-03-23 13:01:24.290521 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-03-23 13:01:24.291424 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-03-23 13:01:24.293152 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-03-23 13:01:24.294473 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-03-23 13:01:24.295292 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-03-23 13:01:24.295986 | orchestrator | 2025-03-23 13:01:24.296583 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-03-23 13:01:24.297170 | orchestrator | Sunday 23 March 2025 13:01:24 +0000 (0:00:06.362) 0:00:07.620 ********** 2025-03-23 13:01:24.379317 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:01:24.379859 | orchestrator | 2025-03-23 13:01:24.379891 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-03-23 13:01:24.380747 | orchestrator | Sunday 23 March 2025 13:01:24 +0000 (0:00:00.091) 0:00:07.712 ********** 2025-03-23 13:01:25.012194 | orchestrator | changed: [testbed-manager] 2025-03-23 13:01:25.013724 | orchestrator | 2025-03-23 13:01:25.014415 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:01:25.015124 | orchestrator | 2025-03-23 13:01:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-23 13:01:25.015381 | orchestrator | 2025-03-23 13:01:25 | INFO  | Please wait and do not abort execution. 
2025-03-23 13:01:25.015411 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-23 13:01:25.016457 | orchestrator | 2025-03-23 13:01:25.017606 | orchestrator | Sunday 23 March 2025 13:01:25 +0000 (0:00:00.634) 0:00:08.346 ********** 2025-03-23 13:01:25.017793 | orchestrator | =============================================================================== 2025-03-23 13:01:25.018228 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.36s 2025-03-23 13:01:25.018781 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.63s 2025-03-23 13:01:25.018853 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.59s 2025-03-23 13:01:25.020341 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.54s 2025-03-23 13:01:25.541521 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2025-03-23 13:01:25.541611 | orchestrator | + osism apply known-hosts 2025-03-23 13:01:27.098261 | orchestrator | 2025-03-23 13:01:27 | INFO  | Task c0a87111-b578-4c70-87ea-5f7de46b3454 (known-hosts) was prepared for execution. 2025-03-23 13:01:30.388974 | orchestrator | 2025-03-23 13:01:27 | INFO  | It takes a moment until task c0a87111-b578-4c70-87ea-5f7de46b3454 (known-hosts) has been started and output is visible here. 2025-03-23 13:01:30.389108 | orchestrator | 2025-03-23 13:01:30.389179 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-03-23 13:01:30.389702 | orchestrator | 2025-03-23 13:01:30.390501 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-03-23 13:01:30.391041 | orchestrator | Sunday 23 March 2025 13:01:30 +0000 (0:00:00.130) 0:00:00.130 ********** 2025-03-23 13:01:36.583884 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-03-23 13:01:36.584071 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-03-23 13:01:36.584414 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-03-23 13:01:36.584824 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-03-23 13:01:36.584872 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-03-23 13:01:36.585619 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-03-23 13:01:36.586972 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-03-23 13:01:36.760093 | orchestrator | 2025-03-23 13:01:36.760166 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-03-23 13:01:36.760184 | orchestrator | Sunday 23 March 2025 13:01:36 +0000 (0:00:06.196) 0:00:06.327 ********** 2025-03-23 13:01:36.760211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-03-23 13:01:36.760548 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-03-23 13:01:36.760725 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-03-23 
13:01:36.764094 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-03-23 13:01:36.764433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-03-23 13:01:36.764706 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-03-23 13:01:36.765324 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-03-23 13:01:36.766254 | orchestrator | 2025-03-23 13:01:36.768463 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-23 13:01:36.773719 | orchestrator | Sunday 23 March 2025 13:01:36 +0000 (0:00:00.177) 0:00:06.504 ********** 2025-03-23 13:01:38.041866 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQClG/L205xNX3awYZvFFKdnMaP+Aqf3Iwm+WOERw3898eVcckCAQGjI6tYk+aCqPdahTh68Byccbk4L/1hBlS/YEeFfTE68y6NL8MZaLTNQjBSGOOsVPQDnrBTYVPHn2klniKdBbXL0shzFeu4imo3gJXtY54kyAyW++6NtrB+NW2UiemlQZJOsOZlqdlyJ+dgcft8iqN78QDXbbtRwtcM94CTRHcNESaV8sh0gCPbUBhpSMTxcd87+MLDYTX88N4BIHTkNY4O6WUgwj2sri31fGtDchAAgGiTHIz9pTx2kZHxc1AUBI/LBlVFBcnt9Zla7sqIwdwbp35OGKYx6lGAT3kYd3y52WcMyDEP7H439T3O0XkVM/XEbe6Ch6iA2fP7vUixZ5iLP/7BPfnIYFDxDPt3LbBX1W37JApDz4lza2EMenZTBS4ls8J/6n7BrrEWc7c9JGvzpcsSX0wWwtDgtEMW/O6k5ObHmH1eEbT0ge6hfpemXnrtgHuSp5nBTrX0=) 2025-03-23 13:01:38.045193 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMvi5hosA47sC48Ha3SzgQSA/C3xrigUzW7nGbRjd4K+g3Acq9aPAG3HqK/6lofdq4/nxGWIZqS9p0JE8vDDRak=) 2025-03-23 13:01:38.046919 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHT7inDFDBCZ5sdUvOg0AfX7LX+uqeOcOJsptH5Y30sh) 2025-03-23 13:01:38.047325 | orchestrator | 2025-03-23 13:01:38.047899 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-23 13:01:38.048920 | orchestrator | Sunday 23 March 2025 13:01:38 +0000 (0:00:01.281) 0:00:07.786 ********** 2025-03-23 13:01:39.187208 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAsHVFRqNhGOxr51A+A8yQ1XsYkMLYy9/BfrhIKeZNEmZZshmuM9jbiZKNcfTxXHaDmQ/dVvhLkScF9czZUy0qE=) 2025-03-23 13:01:39.188025 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfxgErTDt+A0j3VblMPmCf+hbp2y1EEfC34MLuXhLS92x1aB29jc986Mqh7dw/0+Aa03aFg+/JofhJz8mpyLbWs7nZB8KNjDuzfvUGNK9/hqmo93Ifs+5OXrcI83yyfjRmYNLB59yUEgDmm2fllxCzS2bQ9T2OWXkYeDWWv2wEHmI/wUzu+9KHQwQdPBNbNU0VeFynqcPBw07cAZB30TSzvpazvcyI1tX4FJ945Z4b0uVRKwPJzQ6pFtVwcvPo30BRdhyBxLHHZ3U13efweUBnj6vE4nYgxKkHfeRwCQmP+y9sZvpo3CZjpjRaieiQsknh6UU4wbAzbRlgPwDLau64AaMmuBsU+O9byl8Lon9p4qF7mA0N+HUIOFYBjjegUw3fwsnIG0/v2x9J65xG/C5pSbBW/KypNQGpX8XytcDtNlnj1Dxx1Wmrz1GoARbWCKGgzQxcX/IeyYIauGDnowq2cMcAWz1X6kxvI9Gy+JBCMfNl54Kbf8eTZBmAt4TbLOU=) 2025-03-23 13:01:39.189466 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKE1A40vWNaA///Jjs9qDm1FMPKtg0J+C6HOqgRIdM4f) 2025-03-23 13:01:39.190161 | orchestrator | 2025-03-23 13:01:39.191107 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-23 13:01:39.192547 | orchestrator | Sunday 23 March 2025 13:01:39 +0000 (0:00:01.144) 0:00:08.931 ********** 2025-03-23 13:01:40.351441 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKyT93gtyVYHVqdsAY0Ug/AjNIsILBEKw1sUkF9I3RBu/M/MehDR7wrZHA1YzB3sR2WN324IZQiy8gb15w5MkR5YUqz6Xv2TMBr9inaM9nVjT4mtmeIjm2RfSkkQ7jyqpDQYSlmcXl2VSimVpB5fsI3bPVQOnw61n5kNS6fQtCHgWcqg64J9ERfiOjJYcclp3+DLc0EEQWzHn2ZyBJkOkDxHK+Z+DgqPAdjirVONKSk0iE+qgVbRNaF/nEd6ok1+DhfhybHrRGPFxYcvIzwJsLh6/QIZBSbHwgHpTpkSjay/P0d5Uyq8ZaoCOoRKYPI+DNLed8YMhb2Wu0dNrpsvLUjzpSHfcVta4c5gEIDh7/+KWaxT9Z8/vP+CMe2Cb3MGAft5ahMBAwIlZ6s3GZASWLxJkWXGlkO+CNQq8tSl1GZHquxZtfvm9pfZ43Ry8mqltj5KmgcypQHoJrV6luwFbOWnX5RVCFSB67HcwJE1CdlbOhebpJBbEhzD29Z9TMtuc=) 2025-03-23 13:01:40.352015 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJebxO6eWrj2X2QiEOeOEzyrkPl2k2lDpegw2loABF/+SXGrcwEqC5UN4NCV1WvDTaZrvs6VadXURqGIT4Blm2A=) 2025-03-23 13:01:40.352056 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILc3GGyUEjgmDwNK5D13X+f6ipLOF6Df0XCF+XG85U2b) 2025-03-23 13:01:40.353043 | orchestrator | 2025-03-23 13:01:40.353593 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-23 13:01:40.354846 | orchestrator | Sunday 23 March 2025 13:01:40 +0000 (0:00:01.163) 0:00:10.094 ********** 2025-03-23 13:01:41.518398 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKuQXoq/bGewZL7tmT0U+yLdMQFS7qQ8X4hAlZ9ScX84cnRxLXGMmg1hfB3F1owZl+ycLssTnmklUuTNNN+zOh6MN/1q9KNQyDeOupx1NzFzE4gMzHNmV8bek0OPCyZ+mubFyXeNwHA/ooTw6TlIz1MFcYR09xXqERYQBuINm4Alcel3FhHABv3HQXdH63ujmAMFN5GhnPYRwHkXtfsb4ZCfQ0A54Mxo3BMZpDAZf1cbuRrMsNpAktwQ2JH5G8x78Z5XKPF58VgyY1mom4BAZwGPlHbVCwik9oxYbeG7d7Suy0DZGz9fYzSYykHPaPwUp2gsvfaGZlV0syW8C8UMmqsqM8lAe8bkSEy8y0rWndz4tLkZ9w2OTwhO405cieEqmr4Kity/EbYeJG9y3L6Ai7SXnxLOQlhfWgNeYN0ViIT4+02HSTrr6kIK7ZnO4QLwVTnG0mV6MWsEvCmH0JvlZsoaD+KQYIT4YHBcfO3xJ9H0QVwuN9WxSvo/QcLqxd8mc=) 2025-03-23 13:01:41.519189 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKswL3J/Rj/d8mlhjvjv6k+vtFEyQfe5IhPo4tB9Uv2SbdnN8lVIxGaTThXR7kPaA5BgawciTBIalIjTW9BpRAQ=) 2025-03-23 13:01:41.519438 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIADeaG/kdEpC7fPjSHBK3D6uu3nyak7B6ArJceSmVhD/) 2025-03-23 13:01:41.519937 | orchestrator | 2025-03-23 13:01:41.520620 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-23 13:01:41.521081 | orchestrator | Sunday 23 March 2025 13:01:41 +0000 (0:00:01.163) 0:00:11.257 ********** 2025-03-23 13:01:42.710901 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC0vupw5p4pzwEVNl4ox2gUzw8CHRyAORGO4hZFhyEIeXAEaG20H4iBzNZxQcDwBNjd+fWeUkC4rs7FYdqYjAenIRQI9k3LpT7fNN7iyIs/vv6nKxeZNmBemxPSwHh4GBmkihBvw9ZvkaJr6ECRdJcMHST49HiTpmnfliKombjOTJkNAF4eVLa36cVM832MhSN0ES01DRKtT8nZPjIgYcBJSSgFzA/PsWSA02wJVtPFIsOUyAXPkJjK7MK+ATRjOXnRXQt2g+gCEDNEgOyWoOkHPv3IqPlmbTq1ongFpBZhBs+9pCa+wl4D+84QmMrWCkaGmQQPN84WzliOFbThQEPQgrVytv0F75NH7Etnqr67SeHIIu33uHtcITV9hyeTqnYdDoUhzZNOUSGCxMHpFHaY+4bJ7h8QdJk/nB1C4FqgBCVENRr46uVl1HTBQYmMNLVjKa/l9zq8sG6OT229dcdWqSUO1zy8mKbhMh3QGoOcv70Gz2FKsHDCo1bndLSmB4E=) 2025-03-23 13:01:42.711233 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM60cAKWUSaxZUcHmR+ipGOr5S9e9mKNvMiq1mPVikZ2KcuDm2GK5yZp3Shb1wRuEhBrPSj/uN7Hfb73MGvi6wk=) 2025-03-23 13:01:42.711947 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMHu0nIj1Dne/w4LZKrlF9+jmdlmojJMxHccJjggb35b) 2025-03-23 13:01:42.712000 | orchestrator | 2025-03-23 13:01:42.715245 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-23 13:01:43.847016 | orchestrator | Sunday 23 March 2025 13:01:42 +0000 (0:00:01.197) 0:00:12.454 ********** 2025-03-23 13:01:43.847149 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC63obq/rDgkoDO34lMQhUoCw+7SZOW07Yditk9ylhi7/hZLFiJNFYl7DqL2rx/bnGaR69NArk5XPvPrUvH0Nu/SZLXH0AjVWXXQqQ/+2PkYjxO5xHQ3NzXLRhaqJPUMFqIYvkvdLU6C9ShtoAPGze4htoS9xQ6XNuo+R07pOwEWsU8MMDgKxF+91XWOKFWKcoAQZEonFUea6B+phf8YkwHlvSTxVWQZWHlyIkS83jvgt2bAFDxYNSbMecw77CZrGOLvR5Pv5WJuG2jxSRqNoj9eQzrzhKSJ3yEO+S/YHHg5D0MKHu8/QBgpkK2JrB6MwpZEYKnHFuS49D5n/svBCtvCuw61gj7WWNUS+87QtvR5/0scqCGS2cVdvuDFwvtKsMWYdCsXoT3VXZgvqqgugqFWyQJUDfD6ceQiHrho7COognPCKHakNg69R4kE6oESMyussPcEbB7A2KualKHBAqs68w7uUu5CdKv4G7v0XJuBB6Aiy6vRSRjY0Zh3JPfvVM=) 2025-03-23 13:01:43.847627 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN90pOg1DDYnMn7TbLj8EvHVBuuhkg23dzsh7h0l1zzlisAhOgZ8J5iJA4jdJkg9ZRZmBO7gnhqtFkTgCgiTPGo=) 2025-03-23 13:01:43.848319 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJWJ65WHdxu5IDABitj06/g+vVJqtXlZu+GFrb/zfe/n) 2025-03-23 13:01:43.848847 | orchestrator | 2025-03-23 13:01:43.849576 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-23 13:01:43.849966 | orchestrator | Sunday 23 March 2025 13:01:43 +0000 (0:00:01.133) 0:00:13.588 ********** 2025-03-23 13:01:45.022404 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkhnQ4zcnny6GtqR3G3W3eBxkdGUKRUAsL7k8eeVsmQDqzusIYaDyRYY79YGR2Kwllo/2o7R7zL8EWgYo32KUaYI4kwnxoP0OWnEWx00O6KDwBUtFlf/K3mAB75QaVyYfMCKgQS//50g9A3QDjJYpaSqPlqQY1VpLoSksYf3SPwh44kCEr2O2B6gNWYSKYpvH5+kC6dF0gBC9InAngv8gWCMJ6axtkucEmx8LxrhywOSd0T5iIgDaoQPk5mznw0boKWqK4WZxF87sar4mvN75+MsVTfLCT3s4fkWb8FPRlNnXIzqeevh0H/3RNZflfhV07eH1BTIfyoMdHwWOGgQERj7yR5VjI6B3yHMEniJCkSJ9qtLkPW91Ugy+qbeH1BYf0JVrpI8kK6zDMn/Vyy1TbFFErjicMLehZcDTI269co5VzRBOOYas9+842UcBPwU5iv1X762STfS3xa2YYWjjy5fQ3EsM9WjsrnCCQK5ZmH/OsXMWKIYvA1+TVVlYsCNE=) 2025-03-23 13:01:45.023357 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBISQNX7RMbuZumJPNX8dgBqvixf6xWTFURbYtV1rQTiK3ghJl73aPxHJocr+YjCl9GdyF35VoaiLhQr56P2nNCM=) 2025-03-23 
13:01:45.024177 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDPmVLYFmK198TNjYs8HPghvQ8+U7+noTWy7JkTQY/58) 2025-03-23 13:01:45.025036 | orchestrator | 2025-03-23 13:01:45.025983 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-03-23 13:01:45.026870 | orchestrator | Sunday 23 March 2025 13:01:45 +0000 (0:00:01.177) 0:00:14.766 ********** 2025-03-23 13:01:50.486507 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-03-23 13:01:50.486792 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-03-23 13:01:50.487109 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-03-23 13:01:50.487528 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-03-23 13:01:50.488205 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-03-23 13:01:50.489212 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-03-23 13:01:50.490608 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-03-23 13:01:50.490645 | orchestrator | 2025-03-23 13:01:50.491050 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-03-23 13:01:50.491451 | orchestrator | Sunday 23 March 2025 13:01:50 +0000 (0:00:05.464) 0:00:20.230 ********** 2025-03-23 13:01:50.673612 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-03-23 13:01:50.675989 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-03-23 13:01:50.677388 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-03-23 13:01:50.678462 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-03-23 13:01:50.678843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-03-23 13:01:50.679820 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-03-23 13:01:50.681808 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-03-23 13:01:50.682685 | orchestrator | 2025-03-23 13:01:50.682725 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-23 13:01:50.683532 | orchestrator | Sunday 23 March 2025 13:01:50 +0000 (0:00:00.188) 0:00:20.419 ********** 2025-03-23 13:01:51.824445 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQClG/L205xNX3awYZvFFKdnMaP+Aqf3Iwm+WOERw3898eVcckCAQGjI6tYk+aCqPdahTh68Byccbk4L/1hBlS/YEeFfTE68y6NL8MZaLTNQjBSGOOsVPQDnrBTYVPHn2klniKdBbXL0shzFeu4imo3gJXtY54kyAyW++6NtrB+NW2UiemlQZJOsOZlqdlyJ+dgcft8iqN78QDXbbtRwtcM94CTRHcNESaV8sh0gCPbUBhpSMTxcd87+MLDYTX88N4BIHTkNY4O6WUgwj2sri31fGtDchAAgGiTHIz9pTx2kZHxc1AUBI/LBlVFBcnt9Zla7sqIwdwbp35OGKYx6lGAT3kYd3y52WcMyDEP7H439T3O0XkVM/XEbe6Ch6iA2fP7vUixZ5iLP/7BPfnIYFDxDPt3LbBX1W37JApDz4lza2EMenZTBS4ls8J/6n7BrrEWc7c9JGvzpcsSX0wWwtDgtEMW/O6k5ObHmH1eEbT0ge6hfpemXnrtgHuSp5nBTrX0=) 2025-03-23 13:01:51.825768 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMvi5hosA47sC48Ha3SzgQSA/C3xrigUzW7nGbRjd4K+g3Acq9aPAG3HqK/6lofdq4/nxGWIZqS9p0JE8vDDRak=) 2025-03-23 13:01:51.826893 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHT7inDFDBCZ5sdUvOg0AfX7LX+uqeOcOJsptH5Y30sh) 2025-03-23 13:01:51.827791 | orchestrator | 2025-03-23 13:01:51.828430 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-23 13:01:51.829368 | orchestrator | Sunday 23 March 2025 13:01:51 +0000 (0:00:01.148) 0:00:21.567 ********** 2025-03-23 13:01:52.976225 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfxgErTDt+A0j3VblMPmCf+hbp2y1EEfC34MLuXhLS92x1aB29jc986Mqh7dw/0+Aa03aFg+/JofhJz8mpyLbWs7nZB8KNjDuzfvUGNK9/hqmo93Ifs+5OXrcI83yyfjRmYNLB59yUEgDmm2fllxCzS2bQ9T2OWXkYeDWWv2wEHmI/wUzu+9KHQwQdPBNbNU0VeFynqcPBw07cAZB30TSzvpazvcyI1tX4FJ945Z4b0uVRKwPJzQ6pFtVwcvPo30BRdhyBxLHHZ3U13efweUBnj6vE4nYgxKkHfeRwCQmP+y9sZvpo3CZjpjRaieiQsknh6UU4wbAzbRlgPwDLau64AaMmuBsU+O9byl8Lon9p4qF7mA0N+HUIOFYBjjegUw3fwsnIG0/v2x9J65xG/C5pSbBW/KypNQGpX8XytcDtNlnj1Dxx1Wmrz1GoARbWCKGgzQxcX/IeyYIauGDnowq2cMcAWz1X6kxvI9Gy+JBCMfNl54Kbf8eTZBmAt4TbLOU=) 2025-03-23 13:01:52.977273 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAsHVFRqNhGOxr51A+A8yQ1XsYkMLYy9/BfrhIKeZNEmZZshmuM9jbiZKNcfTxXHaDmQ/dVvhLkScF9czZUy0qE=) 2025-03-23 13:01:52.977892 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKE1A40vWNaA///Jjs9qDm1FMPKtg0J+C6HOqgRIdM4f) 2025-03-23 13:01:52.978592 | orchestrator | 2025-03-23 13:01:52.979481 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-23 13:01:52.980895 | orchestrator | Sunday 23 March 2025 13:01:52 +0000 (0:00:01.151) 0:00:22.719 ********** 2025-03-23 13:01:54.110145 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJebxO6eWrj2X2QiEOeOEzyrkPl2k2lDpegw2loABF/+SXGrcwEqC5UN4NCV1WvDTaZrvs6VadXURqGIT4Blm2A=) 2025-03-23 13:01:54.110956 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILc3GGyUEjgmDwNK5D13X+f6ipLOF6Df0XCF+XG85U2b) 2025-03-23 13:01:54.111007 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDKyT93gtyVYHVqdsAY0Ug/AjNIsILBEKw1sUkF9I3RBu/M/MehDR7wrZHA1YzB3sR2WN324IZQiy8gb15w5MkR5YUqz6Xv2TMBr9inaM9nVjT4mtmeIjm2RfSkkQ7jyqpDQYSlmcXl2VSimVpB5fsI3bPVQOnw61n5kNS6fQtCHgWcqg64J9ERfiOjJYcclp3+DLc0EEQWzHn2ZyBJkOkDxHK+Z+DgqPAdjirVONKSk0iE+qgVbRNaF/nEd6ok1+DhfhybHrRGPFxYcvIzwJsLh6/QIZBSbHwgHpTpkSjay/P0d5Uyq8ZaoCOoRKYPI+DNLed8YMhb2Wu0dNrpsvLUjzpSHfcVta4c5gEIDh7/+KWaxT9Z8/vP+CMe2Cb3MGAft5ahMBAwIlZ6s3GZASWLxJkWXGlkO+CNQq8tSl1GZHquxZtfvm9pfZ43Ry8mqltj5KmgcypQHoJrV6luwFbOWnX5RVCFSB67HcwJE1CdlbOhebpJBbEhzD29Z9TMtuc=) 2025-03-23 13:01:54.112050 | orchestrator | 2025-03-23 13:01:54.112603 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-23 13:01:54.113257 | orchestrator | Sunday 23 March 2025 13:01:54 +0000 (0:00:01.133) 0:00:23.853 ********** 2025-03-23 13:01:55.258538 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKswL3J/Rj/d8mlhjvjv6k+vtFEyQfe5IhPo4tB9Uv2SbdnN8lVIxGaTThXR7kPaA5BgawciTBIalIjTW9BpRAQ=) 2025-03-23 13:01:55.258916 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKuQXoq/bGewZL7tmT0U+yLdMQFS7qQ8X4hAlZ9ScX84cnRxLXGMmg1hfB3F1owZl+ycLssTnmklUuTNNN+zOh6MN/1q9KNQyDeOupx1NzFzE4gMzHNmV8bek0OPCyZ+mubFyXeNwHA/ooTw6TlIz1MFcYR09xXqERYQBuINm4Alcel3FhHABv3HQXdH63ujmAMFN5GhnPYRwHkXtfsb4ZCfQ0A54Mxo3BMZpDAZf1cbuRrMsNpAktwQ2JH5G8x78Z5XKPF58VgyY1mom4BAZwGPlHbVCwik9oxYbeG7d7Suy0DZGz9fYzSYykHPaPwUp2gsvfaGZlV0syW8C8UMmqsqM8lAe8bkSEy8y0rWndz4tLkZ9w2OTwhO405cieEqmr4Kity/EbYeJG9y3L6Ai7SXnxLOQlhfWgNeYN0ViIT4+02HSTrr6kIK7ZnO4QLwVTnG0mV6MWsEvCmH0JvlZsoaD+KQYIT4YHBcfO3xJ9H0QVwuN9WxSvo/QcLqxd8mc=) 2025-03-23 13:01:55.259325 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIADeaG/kdEpC7fPjSHBK3D6uu3nyak7B6ArJceSmVhD/) 2025-03-23 13:01:55.259818 | orchestrator | 2025-03-23 13:01:55.260250 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-23 13:01:55.261567 | orchestrator | Sunday 23 March 2025 13:01:55 +0000 (0:00:01.151) 0:00:25.004 ********** 2025-03-23 13:01:56.360315 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0vupw5p4pzwEVNl4ox2gUzw8CHRyAORGO4hZFhyEIeXAEaG20H4iBzNZxQcDwBNjd+fWeUkC4rs7FYdqYjAenIRQI9k3LpT7fNN7iyIs/vv6nKxeZNmBemxPSwHh4GBmkihBvw9ZvkaJr6ECRdJcMHST49HiTpmnfliKombjOTJkNAF4eVLa36cVM832MhSN0ES01DRKtT8nZPjIgYcBJSSgFzA/PsWSA02wJVtPFIsOUyAXPkJjK7MK+ATRjOXnRXQt2g+gCEDNEgOyWoOkHPv3IqPlmbTq1ongFpBZhBs+9pCa+wl4D+84QmMrWCkaGmQQPN84WzliOFbThQEPQgrVytv0F75NH7Etnqr67SeHIIu33uHtcITV9hyeTqnYdDoUhzZNOUSGCxMHpFHaY+4bJ7h8QdJk/nB1C4FqgBCVENRr46uVl1HTBQYmMNLVjKa/l9zq8sG6OT229dcdWqSUO1zy8mKbhMh3QGoOcv70Gz2FKsHDCo1bndLSmB4E=) 2025-03-23 13:01:56.361089 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM60cAKWUSaxZUcHmR+ipGOr5S9e9mKNvMiq1mPVikZ2KcuDm2GK5yZp3Shb1wRuEhBrPSj/uN7Hfb73MGvi6wk=) 2025-03-23 13:01:56.361568 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMHu0nIj1Dne/w4LZKrlF9+jmdlmojJMxHccJjggb35b) 2025-03-23 13:01:56.362384 | orchestrator | 2025-03-23 13:01:56.362498 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-23 13:01:56.362803 | orchestrator | Sunday 23 March 2025 13:01:56 +0000 (0:00:01.100) 0:00:26.104 
********** 2025-03-23 13:01:57.518254 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC63obq/rDgkoDO34lMQhUoCw+7SZOW07Yditk9ylhi7/hZLFiJNFYl7DqL2rx/bnGaR69NArk5XPvPrUvH0Nu/SZLXH0AjVWXXQqQ/+2PkYjxO5xHQ3NzXLRhaqJPUMFqIYvkvdLU6C9ShtoAPGze4htoS9xQ6XNuo+R07pOwEWsU8MMDgKxF+91XWOKFWKcoAQZEonFUea6B+phf8YkwHlvSTxVWQZWHlyIkS83jvgt2bAFDxYNSbMecw77CZrGOLvR5Pv5WJuG2jxSRqNoj9eQzrzhKSJ3yEO+S/YHHg5D0MKHu8/QBgpkK2JrB6MwpZEYKnHFuS49D5n/svBCtvCuw61gj7WWNUS+87QtvR5/0scqCGS2cVdvuDFwvtKsMWYdCsXoT3VXZgvqqgugqFWyQJUDfD6ceQiHrho7COognPCKHakNg69R4kE6oESMyussPcEbB7A2KualKHBAqs68w7uUu5CdKv4G7v0XJuBB6Aiy6vRSRjY0Zh3JPfvVM=) 2025-03-23 13:01:57.518532 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN90pOg1DDYnMn7TbLj8EvHVBuuhkg23dzsh7h0l1zzlisAhOgZ8J5iJA4jdJkg9ZRZmBO7gnhqtFkTgCgiTPGo=) 2025-03-23 13:01:57.518641 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJWJ65WHdxu5IDABitj06/g+vVJqtXlZu+GFrb/zfe/n) 2025-03-23 13:01:57.518737 | orchestrator | 2025-03-23 13:01:57.518770 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-23 13:01:57.519393 | orchestrator | Sunday 23 March 2025 13:01:57 +0000 (0:00:01.155) 0:00:27.260 ********** 2025-03-23 13:01:58.615856 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDPmVLYFmK198TNjYs8HPghvQ8+U7+noTWy7JkTQY/58) 2025-03-23 13:01:58.617381 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkhnQ4zcnny6GtqR3G3W3eBxkdGUKRUAsL7k8eeVsmQDqzusIYaDyRYY79YGR2Kwllo/2o7R7zL8EWgYo32KUaYI4kwnxoP0OWnEWx00O6KDwBUtFlf/K3mAB75QaVyYfMCKgQS//50g9A3QDjJYpaSqPlqQY1VpLoSksYf3SPwh44kCEr2O2B6gNWYSKYpvH5+kC6dF0gBC9InAngv8gWCMJ6axtkucEmx8LxrhywOSd0T5iIgDaoQPk5mznw0boKWqK4WZxF87sar4mvN75+MsVTfLCT3s4fkWb8FPRlNnXIzqeevh0H/3RNZflfhV07eH1BTIfyoMdHwWOGgQERj7yR5VjI6B3yHMEniJCkSJ9qtLkPW91Ugy+qbeH1BYf0JVrpI8kK6zDMn/Vyy1TbFFErjicMLehZcDTI269co5VzRBOOYas9+842UcBPwU5iv1X762STfS3xa2YYWjjy5fQ3EsM9WjsrnCCQK5ZmH/OsXMWKIYvA1+TVVlYsCNE=) 2025-03-23 13:01:58.617420 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBISQNX7RMbuZumJPNX8dgBqvixf6xWTFURbYtV1rQTiK3ghJl73aPxHJocr+YjCl9GdyF35VoaiLhQr56P2nNCM=) 2025-03-23 13:01:58.617976 | orchestrator | 2025-03-23 13:01:58.619145 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-03-23 13:01:58.621699 | orchestrator | Sunday 23 March 2025 13:01:58 +0000 (0:00:01.098) 0:00:28.359 ********** 2025-03-23 13:01:58.798491 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-03-23 13:01:58.798964 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-03-23 13:01:58.799786 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-03-23 13:01:58.801262 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-03-23 13:01:58.802135 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-03-23 13:01:58.803117 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-03-23 13:01:58.803924 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-03-23 13:01:58.804946 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:01:58.805466 | orchestrator | 
2025-03-23 13:01:58.806377 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-03-23 13:01:58.807826 | orchestrator | Sunday 23 March 2025 13:01:58 +0000 (0:00:00.181) 0:00:28.540 ********** 2025-03-23 13:01:58.861965 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:01:58.862938 | orchestrator | 2025-03-23 13:01:58.863839 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-03-23 13:01:58.864338 | orchestrator | Sunday 23 March 2025 13:01:58 +0000 (0:00:00.065) 0:00:28.605 ********** 2025-03-23 13:01:58.931890 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:01:58.932708 | orchestrator | 2025-03-23 13:01:58.934149 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-03-23 13:01:58.935078 | orchestrator | Sunday 23 March 2025 13:01:58 +0000 (0:00:00.068) 0:00:28.674 ********** 2025-03-23 13:01:59.721041 | orchestrator | changed: [testbed-manager] 2025-03-23 13:01:59.721270 | orchestrator | 2025-03-23 13:01:59.725257 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:01:59.726793 | orchestrator | 2025-03-23 13:01:59 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-23 13:01:59.726823 | orchestrator | 2025-03-23 13:01:59 | INFO  | Please wait and do not abort execution. 2025-03-23 13:01:59.726845 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-23 13:01:59.727430 | orchestrator | 2025-03-23 13:01:59.728425 | orchestrator | Sunday 23 March 2025 13:01:59 +0000 (0:00:00.788) 0:00:29.462 ********** 2025-03-23 13:01:59.729287 | orchestrator | =============================================================================== 2025-03-23 13:01:59.729962 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.20s 2025-03-23 13:01:59.730212 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.46s 2025-03-23 13:01:59.731405 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.28s 2025-03-23 13:01:59.731793 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2025-03-23 13:01:59.732478 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-03-23 13:01:59.734103 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-03-23 13:01:59.734959 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-03-23 13:01:59.736217 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-03-23 13:01:59.737966 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-03-23 13:01:59.739002 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-03-23 13:01:59.741721 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-03-23 13:01:59.743631 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-03-23 13:01:59.744211 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-03-23 13:01:59.744900 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-03-23 13:01:59.745922 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-03-23 13:01:59.746532 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-03-23 13:01:59.747420 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.79s 2025-03-23 13:01:59.748013 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 2025-03-23 13:01:59.748884 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2025-03-23 13:01:59.749598 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2025-03-23 13:02:00.161903 | orchestrator | + osism apply squid 2025-03-23 13:02:01.728087 | orchestrator | 2025-03-23 13:02:01 | INFO  | Task b7e1439b-19ad-42ca-b352-7a4790bbb14f (squid) was prepared for execution. 2025-03-23 13:02:05.261366 | orchestrator | 2025-03-23 13:02:01 | INFO  | It takes a moment until task b7e1439b-19ad-42ca-b352-7a4790bbb14f (squid) has been started and output is visible here. 2025-03-23 13:02:05.261461 | orchestrator | 2025-03-23 13:02:05.263939 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-03-23 13:02:05.263972 | orchestrator | 2025-03-23 13:02:05.264013 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-03-23 13:02:05.265199 | orchestrator | Sunday 23 March 2025 13:02:05 +0000 (0:00:00.111) 0:00:00.111 ********** 2025-03-23 13:02:05.353512 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-03-23 13:02:05.354089 | orchestrator | 2025-03-23 13:02:05.354813 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-03-23 13:02:05.358612 | orchestrator | Sunday 23 March 2025 13:02:05 +0000 (0:00:00.099) 0:00:00.210 ********** 2025-03-23 13:02:06.890489 | orchestrator | ok: [testbed-manager] 2025-03-23 13:02:06.891764 | orchestrator | 2025-03-23 13:02:06.894508 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-03-23 13:02:06.896188 | orchestrator | Sunday 23 March 2025 13:02:06 +0000 (0:00:01.534) 0:00:01.745 ********** 2025-03-23 13:02:08.116040 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-03-23 13:02:08.117641 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-03-23 13:02:08.118185 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-03-23 13:02:08.118784 | orchestrator | 2025-03-23 13:02:08.119612 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-03-23 13:02:08.120036 | orchestrator | Sunday 23 March 2025 13:02:08 +0000 (0:00:01.225) 0:00:02.971 ********** 2025-03-23 13:02:09.332386 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-03-23 13:02:09.332581 | orchestrator | 2025-03-23 13:02:09.333236 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-03-23 13:02:09.333540 | orchestrator | Sunday 23 March 2025 13:02:09 +0000 (0:00:01.216) 0:00:04.188 ********** 2025-03-23 13:02:09.723034 | 
orchestrator | ok: [testbed-manager] 2025-03-23 13:02:09.725968 | orchestrator | 2025-03-23 13:02:09.726111 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-03-23 13:02:09.727189 | orchestrator | Sunday 23 March 2025 13:02:09 +0000 (0:00:00.391) 0:00:04.579 ********** 2025-03-23 13:02:10.769899 | orchestrator | changed: [testbed-manager] 2025-03-23 13:02:10.770529 | orchestrator | 2025-03-23 13:02:10.772874 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-03-23 13:02:10.773618 | orchestrator | Sunday 23 March 2025 13:02:10 +0000 (0:00:01.045) 0:00:05.624 ********** 2025-03-23 13:02:38.834578 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-03-23 13:02:38.834892 | orchestrator | ok: [testbed-manager] 2025-03-23 13:02:38.834960 | orchestrator | 2025-03-23 13:02:38.834984 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-03-23 13:02:38.835049 | orchestrator | Sunday 23 March 2025 13:02:38 +0000 (0:00:28.063) 0:00:33.687 ********** 2025-03-23 13:02:51.300880 | orchestrator | changed: [testbed-manager] 2025-03-23 13:03:51.373906 | orchestrator | 2025-03-23 13:03:51.374108 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-03-23 13:03:51.374134 | orchestrator | Sunday 23 March 2025 13:02:51 +0000 (0:00:12.467) 0:00:46.155 ********** 2025-03-23 13:03:51.374165 | orchestrator | Pausing for 60 seconds 2025-03-23 13:03:51.447269 | orchestrator | changed: [testbed-manager] 2025-03-23 13:03:51.447385 | orchestrator | 2025-03-23 13:03:51.447404 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-03-23 13:03:51.447420 | orchestrator | Sunday 23 March 2025 13:03:51 +0000 (0:01:00.070) 0:01:46.226 ********** 2025-03-23 13:03:51.447449 | orchestrator | ok: [testbed-manager] 2025-03-23 13:03:51.447514 | orchestrator | 2025-03-23 13:03:51.447819 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-03-23 13:03:51.448494 | orchestrator | Sunday 23 March 2025 13:03:51 +0000 (0:00:00.075) 0:01:46.301 ********** 2025-03-23 13:03:52.085121 | orchestrator | changed: [testbed-manager] 2025-03-23 13:03:52.085292 | orchestrator | 2025-03-23 13:03:52.085315 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:03:52.085335 | orchestrator | 2025-03-23 13:03:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-23 13:03:52.085712 | orchestrator | 2025-03-23 13:03:52 | INFO  | Please wait and do not abort execution. 
2025-03-23 13:03:52.085746 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:03:52.086108 | orchestrator | 2025-03-23 13:03:52.086795 | orchestrator | Sunday 23 March 2025 13:03:52 +0000 (0:00:00.640) 0:01:46.941 ********** 2025-03-23 13:03:52.087512 | orchestrator | =============================================================================== 2025-03-23 13:03:52.087731 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-03-23 13:03:52.088119 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 28.06s 2025-03-23 13:03:52.089098 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.47s 2025-03-23 13:03:52.089552 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.54s 2025-03-23 13:03:52.089599 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.23s 2025-03-23 13:03:52.090430 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.22s 2025-03-23 13:03:52.090863 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.05s 2025-03-23 13:03:52.091229 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.64s 2025-03-23 13:03:52.091254 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.39s 2025-03-23 13:03:52.091274 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-03-23 13:03:52.092670 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2025-03-23 13:03:52.564229 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-03-23 13:03:52.573327 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-03-23 13:03:52.573362 | orchestrator | ++ semver 8.1.0 9.0.0 2025-03-23 13:03:52.636914 | orchestrator | + [[ -1 -lt 0 ]] 2025-03-23 13:03:52.641978 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-03-23 13:03:52.642006 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml 2025-03-23 13:03:52.642072 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-03-23 13:03:52.646010 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-03-23 13:03:52.652298 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-03-23 13:03:54.206824 | orchestrator | 2025-03-23 13:03:54 | INFO  | Task 27bd7c49-6c10-4ee4-b6c1-c555fb50c338 (operator) was prepared for execution. 2025-03-23 13:03:57.420706 | orchestrator | 2025-03-23 13:03:54 | INFO  | It takes a moment until task 27bd7c49-6c10-4ee4-b6c1-c555fb50c338 (operator) has been started and output is visible here. 
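Before osism apply operator is started, the sed calls above flip the kolla docker_namespace to kolla/release and un-comment the vxlan network-dispatcher entries in the testbed group_vars. A quick sanity check, sketch only, that reuses the keys named in those sed expressions:

# the key and its vxlan entries edited above should no longer carry a leading '# '
grep -n -A2 '^network_dispatcher_scripts:' /opt/configuration/inventory/group_vars/testbed-nodes.yml
# the namespace rewrite in kolla.yml
grep -n 'docker_namespace: kolla/release' /opt/configuration/inventory/group_vars/all/kolla.yml

Both grep calls exit non-zero when nothing matches, so they could double as a pass/fail signal in a wrapper script.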
2025-03-23 13:03:57.420840 | orchestrator | 2025-03-23 13:03:57.421821 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-03-23 13:03:57.425306 | orchestrator | 2025-03-23 13:03:57.425961 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-23 13:03:57.426734 | orchestrator | Sunday 23 March 2025 13:03:57 +0000 (0:00:00.100) 0:00:00.100 ********** 2025-03-23 13:04:00.932100 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:04:00.932361 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:04:00.933276 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:04:00.933840 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:04:00.933870 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:04:00.937435 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:04:00.937761 | orchestrator | 2025-03-23 13:04:00.938225 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-03-23 13:04:00.938784 | orchestrator | Sunday 23 March 2025 13:04:00 +0000 (0:00:03.508) 0:00:03.609 ********** 2025-03-23 13:04:01.818519 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:04:01.819673 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:04:01.821288 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:04:01.821991 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:04:01.823021 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:04:01.824067 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:04:01.825904 | orchestrator | 2025-03-23 13:04:01.826298 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-03-23 13:04:01.826806 | orchestrator | 2025-03-23 13:04:01.827579 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-03-23 13:04:01.827956 | orchestrator | Sunday 23 March 2025 13:04:01 +0000 (0:00:00.886) 0:00:04.496 ********** 2025-03-23 13:04:01.893697 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:04:01.920362 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:04:01.954712 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:04:02.012359 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:04:02.014416 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:04:02.034071 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:04:02.038502 | orchestrator | 2025-03-23 13:04:02.038537 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-03-23 13:04:02.123542 | orchestrator | Sunday 23 March 2025 13:04:02 +0000 (0:00:00.197) 0:00:04.693 ********** 2025-03-23 13:04:02.123659 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:04:02.143548 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:04:02.164934 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:04:02.222188 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:04:02.223433 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:04:02.226304 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:04:02.226775 | orchestrator | 2025-03-23 13:04:02.226801 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-03-23 13:04:02.226820 | orchestrator | Sunday 23 March 2025 13:04:02 +0000 (0:00:00.210) 0:00:04.903 ********** 2025-03-23 13:04:03.070909 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:04:03.071297 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:04:03.073120 | orchestrator | changed: [testbed-node-3] 2025-03-23 
13:04:03.076327 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:04:03.077068 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:04:03.077851 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:04:03.078531 | orchestrator | 2025-03-23 13:04:03.079314 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-03-23 13:04:03.080106 | orchestrator | Sunday 23 March 2025 13:04:03 +0000 (0:00:00.847) 0:00:05.751 ********** 2025-03-23 13:04:03.886297 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:04:03.887147 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:04:03.887653 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:04:03.890121 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:04:03.890964 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:04:03.891105 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:04:03.891840 | orchestrator | 2025-03-23 13:04:03.892259 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-03-23 13:04:03.894153 | orchestrator | Sunday 23 March 2025 13:04:03 +0000 (0:00:00.815) 0:00:06.566 ********** 2025-03-23 13:04:05.089961 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-03-23 13:04:05.091684 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-03-23 13:04:05.092689 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-03-23 13:04:05.092723 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-03-23 13:04:05.095833 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-03-23 13:04:05.096645 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-03-23 13:04:05.096674 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-03-23 13:04:05.096698 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-03-23 13:04:05.097501 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-03-23 13:04:05.098335 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-03-23 13:04:05.100919 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-03-23 13:04:05.101668 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-03-23 13:04:05.101784 | orchestrator | 2025-03-23 13:04:05.102301 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-03-23 13:04:05.103190 | orchestrator | Sunday 23 March 2025 13:04:05 +0000 (0:00:01.200) 0:00:07.767 ********** 2025-03-23 13:04:06.525838 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:04:06.526587 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:04:06.529467 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:04:06.530920 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:04:06.530953 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:04:06.530969 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:04:06.530989 | orchestrator | 2025-03-23 13:04:06.531920 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-03-23 13:04:06.532892 | orchestrator | Sunday 23 March 2025 13:04:06 +0000 (0:00:01.438) 0:00:09.205 ********** 2025-03-23 13:04:07.815845 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-03-23 13:04:07.889703 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-03-23 13:04:07.889797 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-03-23 13:04:07.889828 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-03-23 13:04:07.894184 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-03-23 13:04:07.895700 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-03-23 13:04:07.895726 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-03-23 13:04:07.895740 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-03-23 13:04:07.895753 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-03-23 13:04:07.895772 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-03-23 13:04:07.896005 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-03-23 13:04:07.896364 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-03-23 13:04:07.896758 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-03-23 13:04:07.897096 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-03-23 13:04:07.897495 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-03-23 13:04:07.900387 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-03-23 13:04:07.901267 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-03-23 13:04:07.901305 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-03-23 13:04:08.588754 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-03-23 13:04:08.588874 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-03-23 13:04:08.588891 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-03-23 13:04:08.588905 | orchestrator | 2025-03-23 13:04:08.588919 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-03-23 13:04:08.588931 | orchestrator | Sunday 23 March 2025 13:04:07 +0000 (0:00:01.365) 0:00:10.570 ********** 2025-03-23 13:04:08.588977 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:04:08.589847 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:04:08.591291 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:04:08.595020 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:04:08.595845 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:04:08.596670 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:04:08.597099 | orchestrator | 2025-03-23 13:04:08.597811 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-03-23 13:04:08.598059 | orchestrator | Sunday 23 March 2025 13:04:08 +0000 (0:00:00.698) 0:00:11.268 ********** 2025-03-23 13:04:08.654658 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:04:08.679806 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:04:08.704041 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:04:08.756251 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:04:08.756602 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:04:08.756944 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:04:08.757509 | orchestrator | 2025-03-23 13:04:08.757745 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-03-23 13:04:08.758229 | orchestrator | Sunday 23 March 2025 13:04:08 +0000 (0:00:00.169) 0:00:11.438 ********** 2025-03-23 13:04:09.530344 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-03-23 13:04:09.532339 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:04:09.532653 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-03-23 13:04:09.533768 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:04:09.534714 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-03-23 13:04:09.535447 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-03-23 13:04:09.536122 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:04:09.536715 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:04:09.537450 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-03-23 13:04:09.537922 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-03-23 13:04:09.538556 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:04:09.539243 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:04:09.539962 | orchestrator | 2025-03-23 13:04:09.540513 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-03-23 13:04:09.541005 | orchestrator | Sunday 23 March 2025 13:04:09 +0000 (0:00:00.771) 0:00:12.210 ********** 2025-03-23 13:04:09.586134 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:04:09.607389 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:04:09.636085 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:04:09.660573 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:04:09.705141 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:04:09.705787 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:04:09.706483 | orchestrator | 2025-03-23 13:04:09.707307 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-03-23 13:04:09.707961 | orchestrator | Sunday 23 March 2025 13:04:09 +0000 (0:00:00.176) 0:00:12.387 ********** 2025-03-23 13:04:09.761522 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:04:09.788703 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:04:09.839592 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:04:09.892489 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:04:09.897191 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:04:09.898127 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:04:09.898956 | orchestrator | 2025-03-23 13:04:09.900161 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-03-23 13:04:09.901156 | orchestrator | Sunday 23 March 2025 13:04:09 +0000 (0:00:00.185) 0:00:12.572 ********** 2025-03-23 13:04:09.967223 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:04:10.002425 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:04:10.030246 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:04:10.065565 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:04:10.069380 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:04:10.069844 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:04:10.072840 | orchestrator | 2025-03-23 13:04:10.072878 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-03-23 13:04:10.072900 | orchestrator | Sunday 23 March 2025 13:04:10 +0000 (0:00:00.172) 0:00:12.745 ********** 2025-03-23 13:04:10.790274 | orchestrator | changed: [testbed-node-0] 2025-03-23 
13:04:10.790800 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:04:10.790827 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:04:10.790846 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:04:10.791880 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:04:10.794198 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:04:10.794440 | orchestrator | 2025-03-23 13:04:10.794464 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-03-23 13:04:10.794482 | orchestrator | Sunday 23 March 2025 13:04:10 +0000 (0:00:00.725) 0:00:13.471 ********** 2025-03-23 13:04:10.868381 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:04:10.894221 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:04:10.924325 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:04:11.048667 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:04:11.048877 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:04:11.051884 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:04:11.052762 | orchestrator | 2025-03-23 13:04:11.053597 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:04:11.053851 | orchestrator | 2025-03-23 13:04:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-23 13:04:11.057971 | orchestrator | 2025-03-23 13:04:11 | INFO  | Please wait and do not abort execution. 2025-03-23 13:04:11.057984 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-23 13:04:11.058281 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-23 13:04:11.059010 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-23 13:04:11.059275 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-23 13:04:11.059640 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-23 13:04:11.060106 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-23 13:04:11.061058 | orchestrator | 2025-03-23 13:04:11.061251 | orchestrator | Sunday 23 March 2025 13:04:11 +0000 (0:00:00.257) 0:00:13.729 ********** 2025-03-23 13:04:11.061704 | orchestrator | =============================================================================== 2025-03-23 13:04:11.062152 | orchestrator | Gathering Facts --------------------------------------------------------- 3.51s 2025-03-23 13:04:11.062535 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.44s 2025-03-23 13:04:11.063689 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.37s 2025-03-23 13:04:11.063955 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.20s 2025-03-23 13:04:11.063983 | orchestrator | Do not require tty for all users ---------------------------------------- 0.89s 2025-03-23 13:04:11.064726 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.85s 2025-03-23 13:04:11.065071 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.82s 2025-03-23 13:04:11.068240 | orchestrator | osism.commons.operator : Set ssh authorized keys 
------------------------ 0.77s 2025-03-23 13:04:11.071524 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.73s 2025-03-23 13:04:11.072809 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.70s 2025-03-23 13:04:11.072835 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.26s 2025-03-23 13:04:11.072847 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.21s 2025-03-23 13:04:11.072860 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.20s 2025-03-23 13:04:11.072878 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.19s 2025-03-23 13:04:11.073387 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s 2025-03-23 13:04:11.073425 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s 2025-03-23 13:04:11.073442 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s 2025-03-23 13:04:11.512951 | orchestrator | + osism apply --environment custom facts 2025-03-23 13:04:12.977261 | orchestrator | 2025-03-23 13:04:12 | INFO  | Trying to run play facts in environment custom 2025-03-23 13:04:13.029760 | orchestrator | 2025-03-23 13:04:13 | INFO  | Task c1a3e487-e415-4e6c-bee7-df19e8630082 (facts) was prepared for execution. 2025-03-23 13:04:15.345996 | orchestrator | 2025-03-23 13:04:13 | INFO  | It takes a moment until task c1a3e487-e415-4e6c-bee7-df19e8630082 (facts) has been started and output is visible here. 2025-03-23 13:04:15.346243 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2025-03-23 13:04:15.346331 | orchestrator | -vvvv to see details 2025-03-23 13:04:15.861988 | orchestrator | 2025-03-23 13:04:15.862708 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-03-23 13:04:15.864046 | orchestrator | 2025-03-23 13:04:15.865490 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-03-23 13:04:16.489193 | orchestrator | fatal: [testbed-manager]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.5\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.5: Permission denied (publickey).\r\n", "unreachable": true} 2025-03-23 13:04:16.490834 | orchestrator | fatal: [testbed-node-1]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.11\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.11: Permission denied (publickey).\r\n", "unreachable": true} 2025-03-23 13:04:16.493493 | orchestrator | fatal: [testbed-node-0]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.10\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.10: Permission denied (publickey).\r\n", "unreachable": true} 2025-03-23 13:04:16.495417 | orchestrator | fatal: [testbed-node-3]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.13\". 
Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.13: Permission denied (publickey).\r\n", "unreachable": true} 2025-03-23 13:04:16.496409 | orchestrator | fatal: [testbed-node-4]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.14\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.14: Permission denied (publickey).\r\n", "unreachable": true} 2025-03-23 13:04:16.496791 | orchestrator | fatal: [testbed-node-5]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.15\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.15: Permission denied (publickey).\r\n", "unreachable": true} 2025-03-23 13:04:16.497686 | orchestrator | fatal: [testbed-node-2]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.12\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.12: Permission denied (publickey).\r\n", "unreachable": true} 2025-03-23 13:04:16.498510 | orchestrator | 2025-03-23 13:04:16.499206 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:04:16.499676 | orchestrator | 2025-03-23 13:04:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-23 13:04:16.499764 | orchestrator | 2025-03-23 13:04:16 | INFO  | Please wait and do not abort execution. 2025-03-23 13:04:16.500762 | orchestrator | testbed-manager : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:04:16.501379 | orchestrator | testbed-node-0 : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:04:16.501836 | orchestrator | testbed-node-1 : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:04:16.502488 | orchestrator | testbed-node-2 : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:04:16.502825 | orchestrator | testbed-node-3 : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:04:16.504304 | orchestrator | testbed-node-4 : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:04:16.504739 | orchestrator | testbed-node-5 : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:04:16.505859 | orchestrator | 2025-03-23 13:04:16.704495 | orchestrator | 2025-03-23 13:04:16 | INFO  | Trying to run play facts in environment custom 2025-03-23 13:04:16.709881 | orchestrator | 2025-03-23 13:04:16 | INFO  | Task baed240c-b65a-486d-8a8f-b08ef57d5a6b (facts) was prepared for execution. 2025-03-23 13:04:19.903862 | orchestrator | 2025-03-23 13:04:16 | INFO  | It takes a moment until task baed240c-b65a-486d-8a8f-b08ef57d5a6b (facts) has been started and output is visible here. 
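The first "osism apply --environment custom facts" run above fails on every host because the Ansible runner cannot find the deploy key it expects at /ansible/secrets/id_rsa, so key-based login as dragon@192.168.16.x is refused; the immediate retry that follows reaches all hosts, which suggests the key simply was not in place yet on the first attempt. As a minimal sketch, assuming the standard Ansible connection variables (ansible_user, ansible_host, ansible_ssh_private_key_file), such a connection would typically be declared like this; the user, addresses, and key path are taken from the error messages above, while the inventory layout itself is illustrative and not taken from the testbed repository:

    # Illustrative inventory sketch only, not the testbed's actual file.
    all:
      vars:
        ansible_user: dragon                                    # login user seen in the errors above
        ansible_ssh_private_key_file: /ansible/secrets/id_rsa   # key path the failed run expected
      hosts:
        testbed-manager:
          ansible_host: 192.168.16.5
        testbed-node-0:
          ansible_host: 192.168.16.10
        # testbed-node-1 .. testbed-node-5 follow the same pattern (192.168.16.11-.15)

With a layout like this in place, running "ssh -i /ansible/secrets/id_rsa dragon@192.168.16.5" from the runner is usually enough to confirm that the key exists and is accepted before re-running the play.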
2025-03-23 13:04:19.903996 | orchestrator | 2025-03-23 13:04:19.904515 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-03-23 13:04:19.907307 | orchestrator | 2025-03-23 13:04:19.907341 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-03-23 13:04:19.907716 | orchestrator | Sunday 23 March 2025 13:04:19 +0000 (0:00:00.102) 0:00:00.102 ********** 2025-03-23 13:04:21.104194 | orchestrator | ok: [testbed-manager] 2025-03-23 13:04:22.282592 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:04:22.285539 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:04:22.288190 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:04:22.290660 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:04:22.291970 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:04:22.292756 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:04:22.294448 | orchestrator | 2025-03-23 13:04:22.294988 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-03-23 13:04:22.298107 | orchestrator | Sunday 23 March 2025 13:04:22 +0000 (0:00:02.379) 0:00:02.481 ********** 2025-03-23 13:04:23.507666 | orchestrator | ok: [testbed-manager] 2025-03-23 13:04:24.491861 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:04:24.492349 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:04:24.493021 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:04:24.494229 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:04:24.496558 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:04:24.497451 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:04:24.498289 | orchestrator | 2025-03-23 13:04:24.498994 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-03-23 13:04:24.499372 | orchestrator | 2025-03-23 13:04:24.500233 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-03-23 13:04:24.501683 | orchestrator | Sunday 23 March 2025 13:04:24 +0000 (0:00:02.211) 0:00:04.693 ********** 2025-03-23 13:04:24.607300 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:04:24.609321 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:04:24.610405 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:04:24.612141 | orchestrator | 2025-03-23 13:04:24.613849 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-03-23 13:04:24.613989 | orchestrator | Sunday 23 March 2025 13:04:24 +0000 (0:00:00.114) 0:00:04.808 ********** 2025-03-23 13:04:24.748315 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:04:24.749146 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:04:24.749182 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:04:24.749773 | orchestrator | 2025-03-23 13:04:24.750652 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-03-23 13:04:24.751885 | orchestrator | Sunday 23 March 2025 13:04:24 +0000 (0:00:00.142) 0:00:04.950 ********** 2025-03-23 13:04:24.889195 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:04:24.889569 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:04:24.890526 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:04:24.892235 | orchestrator | 2025-03-23 13:04:24.892817 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-03-23 13:04:24.894658 | orchestrator | Sunday 23 
March 2025 13:04:24 +0000 (0:00:00.141) 0:00:05.091 ********** 2025-03-23 13:04:25.036875 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:04:25.039506 | orchestrator | 2025-03-23 13:04:25.040196 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-03-23 13:04:25.042994 | orchestrator | Sunday 23 March 2025 13:04:25 +0000 (0:00:00.147) 0:00:05.239 ********** 2025-03-23 13:04:25.510997 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:04:25.513546 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:04:25.513653 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:04:25.514867 | orchestrator | 2025-03-23 13:04:25.515729 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-03-23 13:04:25.516761 | orchestrator | Sunday 23 March 2025 13:04:25 +0000 (0:00:00.470) 0:00:05.710 ********** 2025-03-23 13:04:25.619536 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:04:25.620728 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:04:25.622687 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:04:25.624618 | orchestrator | 2025-03-23 13:04:25.625582 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-03-23 13:04:25.626774 | orchestrator | Sunday 23 March 2025 13:04:25 +0000 (0:00:00.111) 0:00:05.821 ********** 2025-03-23 13:04:26.721989 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:04:26.722871 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:04:26.723489 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:04:26.723816 | orchestrator | 2025-03-23 13:04:26.725423 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-03-23 13:04:26.726324 | orchestrator | Sunday 23 March 2025 13:04:26 +0000 (0:00:01.100) 0:00:06.922 ********** 2025-03-23 13:04:27.237889 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:04:27.238806 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:04:27.239247 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:04:27.241893 | orchestrator | 2025-03-23 13:04:27.242239 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-03-23 13:04:27.242738 | orchestrator | Sunday 23 March 2025 13:04:27 +0000 (0:00:00.516) 0:00:07.439 ********** 2025-03-23 13:04:28.368283 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:04:28.369516 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:04:28.370659 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:04:28.372652 | orchestrator | 2025-03-23 13:04:28.374056 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-03-23 13:04:28.374833 | orchestrator | Sunday 23 March 2025 13:04:28 +0000 (0:00:01.129) 0:00:08.568 ********** 2025-03-23 13:04:42.298359 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:04:42.298527 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:04:42.298550 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:04:42.300813 | orchestrator | 2025-03-23 13:04:42.301105 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-03-23 13:04:42.302471 | orchestrator | Sunday 23 March 2025 13:04:42 +0000 (0:00:13.926) 0:00:22.494 ********** 2025-03-23 13:04:42.387784 | orchestrator | 
skipping: [testbed-node-3] 2025-03-23 13:04:42.388928 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:04:42.389745 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:04:42.393710 | orchestrator | 2025-03-23 13:04:42.394220 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-03-23 13:04:42.395147 | orchestrator | Sunday 23 March 2025 13:04:42 +0000 (0:00:00.095) 0:00:22.590 ********** 2025-03-23 13:04:50.126147 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:04:50.126338 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:04:50.126828 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:04:50.128429 | orchestrator | 2025-03-23 13:04:50.132330 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-03-23 13:04:50.133376 | orchestrator | Sunday 23 March 2025 13:04:50 +0000 (0:00:07.735) 0:00:30.326 ********** 2025-03-23 13:04:50.557857 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:04:50.558071 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:04:50.558105 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:04:50.558792 | orchestrator | 2025-03-23 13:04:50.558828 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-03-23 13:04:50.559059 | orchestrator | Sunday 23 March 2025 13:04:50 +0000 (0:00:00.432) 0:00:30.758 ********** 2025-03-23 13:04:54.159920 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-03-23 13:04:54.160140 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-03-23 13:04:54.163769 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-03-23 13:04:54.166069 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-03-23 13:04:54.166103 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-03-23 13:04:54.166118 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-03-23 13:04:54.166132 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-03-23 13:04:54.166152 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-03-23 13:04:54.166259 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-03-23 13:04:54.166786 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-03-23 13:04:54.167507 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-03-23 13:04:54.168177 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-03-23 13:04:54.169132 | orchestrator | 2025-03-23 13:04:54.169923 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-03-23 13:04:54.170524 | orchestrator | Sunday 23 March 2025 13:04:54 +0000 (0:00:03.601) 0:00:34.360 ********** 2025-03-23 13:04:55.318095 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:04:55.320177 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:04:55.320493 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:04:55.321435 | orchestrator | 2025-03-23 13:04:55.322108 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-03-23 13:04:55.322885 | orchestrator | 2025-03-23 13:04:55.323715 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-03-23 13:04:55.324476 | orchestrator | 
Sunday 23 March 2025 13:04:55 +0000 (0:00:01.157) 0:00:35.518 ********** 2025-03-23 13:04:57.224817 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:05:00.824013 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:05:00.824249 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:05:00.826562 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:05:00.826674 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:00.826743 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:05:00.827377 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:00.828658 | orchestrator | 2025-03-23 13:05:00.829534 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:05:00.830067 | orchestrator | 2025-03-23 13:05:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-23 13:05:00.830648 | orchestrator | 2025-03-23 13:05:00 | INFO  | Please wait and do not abort execution. 2025-03-23 13:05:00.830683 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:05:00.831613 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:05:00.832074 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:05:00.832938 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:05:00.833871 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:05:00.834154 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:05:00.834544 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:05:00.834928 | orchestrator | 2025-03-23 13:05:00.835821 | orchestrator | Sunday 23 March 2025 13:05:00 +0000 (0:00:05.508) 0:00:41.026 ********** 2025-03-23 13:05:00.836697 | orchestrator | =============================================================================== 2025-03-23 13:05:00.837299 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.93s 2025-03-23 13:05:00.837940 | orchestrator | Install required packages (Debian) -------------------------------------- 7.74s 2025-03-23 13:05:00.838254 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.51s 2025-03-23 13:05:00.838696 | orchestrator | Copy fact files --------------------------------------------------------- 3.60s 2025-03-23 13:05:00.839017 | orchestrator | Create custom facts directory ------------------------------------------- 2.38s 2025-03-23 13:05:00.839327 | orchestrator | Copy fact file ---------------------------------------------------------- 2.21s 2025-03-23 13:05:00.839673 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.16s 2025-03-23 13:05:00.840082 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.13s 2025-03-23 13:05:00.840757 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.10s 2025-03-23 13:05:00.841144 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.52s 2025-03-23 13:05:00.841919 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.47s 2025-03-23 
13:05:00.842532 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s 2025-03-23 13:05:00.842999 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s 2025-03-23 13:05:00.843031 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.14s 2025-03-23 13:05:00.843248 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.14s 2025-03-23 13:05:00.843358 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s 2025-03-23 13:05:00.843747 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s 2025-03-23 13:05:00.844193 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2025-03-23 13:05:01.321020 | orchestrator | + osism apply bootstrap 2025-03-23 13:05:02.855274 | orchestrator | 2025-03-23 13:05:02 | INFO  | Task 0104aa1d-01b4-4b67-aebc-0dc8fde002b6 (bootstrap) was prepared for execution. 2025-03-23 13:05:06.216293 | orchestrator | 2025-03-23 13:05:02 | INFO  | It takes a moment until task 0104aa1d-01b4-4b67-aebc-0dc8fde002b6 (bootstrap) has been started and output is visible here. 2025-03-23 13:05:06.216441 | orchestrator | 2025-03-23 13:05:06.219818 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-03-23 13:05:06.221816 | orchestrator | 2025-03-23 13:05:06.223014 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-03-23 13:05:06.223238 | orchestrator | Sunday 23 March 2025 13:05:06 +0000 (0:00:00.110) 0:00:00.110 ********** 2025-03-23 13:05:06.293843 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:06.324093 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:05:06.353965 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:06.384557 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:05:06.479882 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:05:06.481640 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:05:06.482520 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:05:06.485191 | orchestrator | 2025-03-23 13:05:06.485258 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-03-23 13:05:06.485308 | orchestrator | 2025-03-23 13:05:06.485803 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-03-23 13:05:06.486374 | orchestrator | Sunday 23 March 2025 13:05:06 +0000 (0:00:00.266) 0:00:00.376 ********** 2025-03-23 13:05:10.529763 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:05:10.531362 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:05:10.531842 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:05:10.531908 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:05:10.532331 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:05:10.533739 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:10.535041 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:10.535077 | orchestrator | 2025-03-23 13:05:10.535805 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-03-23 13:05:10.536312 | orchestrator | 2025-03-23 13:05:10.536777 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-03-23 13:05:10.537107 | orchestrator | Sunday 23 March 2025 13:05:10 +0000 (0:00:04.049) 0:00:04.426 
********** 2025-03-23 13:05:10.629327 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-03-23 13:05:10.686486 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-03-23 13:05:10.735443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:05:10.735636 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-03-23 13:05:10.735656 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:05:10.735671 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-03-23 13:05:10.735685 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:05:10.735700 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-03-23 13:05:10.735732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-03-23 13:05:10.735757 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-03-23 13:05:10.735825 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-03-23 13:05:10.735846 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-03-23 13:05:10.736122 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-03-23 13:05:10.736408 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-03-23 13:05:10.736770 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-03-23 13:05:10.737090 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-03-23 13:05:10.737507 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-03-23 13:05:11.017803 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:05:11.020357 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-03-23 13:05:11.024069 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-03-23 13:05:11.024227 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-03-23 13:05:11.024259 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-03-23 13:05:11.025480 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-03-23 13:05:11.027282 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:05:11.027347 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-03-23 13:05:11.028089 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-23 13:05:11.030578 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-03-23 13:05:11.030971 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-03-23 13:05:11.032847 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-03-23 13:05:11.033518 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-23 13:05:11.034787 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-03-23 13:05:11.035871 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-03-23 13:05:11.037097 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-03-23 13:05:11.038225 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-03-23 13:05:11.039250 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-23 13:05:11.040267 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-03-23 13:05:11.041156 | orchestrator | skipping: [testbed-manager] 2025-03-23 
13:05:11.042231 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-03-23 13:05:11.042690 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-03-23 13:05:11.043413 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-03-23 13:05:11.044260 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-23 13:05:11.044963 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-03-23 13:05:11.045370 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-03-23 13:05:11.045862 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-03-23 13:05:11.046275 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-23 13:05:11.046444 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-03-23 13:05:11.046925 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:05:11.047833 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-03-23 13:05:11.048879 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-03-23 13:05:11.048925 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:05:11.049705 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-23 13:05:11.050618 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:05:11.051452 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-03-23 13:05:11.052344 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-03-23 13:05:11.052816 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-03-23 13:05:11.054573 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:05:11.055200 | orchestrator | 2025-03-23 13:05:11.056284 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-03-23 13:05:11.057034 | orchestrator | 2025-03-23 13:05:11.059385 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] ************************* 2025-03-23 13:05:11.060067 | orchestrator | Sunday 23 March 2025 13:05:11 +0000 (0:00:00.487) 0:00:04.914 ********** 2025-03-23 13:05:11.131354 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:11.153988 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:05:11.177508 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:11.209659 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:05:11.293742 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:05:11.294696 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:05:11.296125 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:05:11.297279 | orchestrator | 2025-03-23 13:05:11.298113 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-03-23 13:05:11.299071 | orchestrator | Sunday 23 March 2025 13:05:11 +0000 (0:00:00.275) 0:00:05.189 ********** 2025-03-23 13:05:12.632678 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:05:12.634664 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:12.635368 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:05:12.636685 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:05:12.637800 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:05:12.638556 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:05:12.639382 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:12.640242 | orchestrator | 2025-03-23 13:05:12.641138 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 
2025-03-23 13:05:12.641349 | orchestrator | Sunday 23 March 2025 13:05:12 +0000 (0:00:01.338) 0:00:06.528 ********** 2025-03-23 13:05:13.985427 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:05:13.987407 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:05:13.988246 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:13.988344 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:05:13.990251 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:05:13.990838 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:05:13.990887 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:13.991873 | orchestrator | 2025-03-23 13:05:13.992531 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-03-23 13:05:13.993511 | orchestrator | Sunday 23 March 2025 13:05:13 +0000 (0:00:01.351) 0:00:07.879 ********** 2025-03-23 13:05:14.293102 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:05:14.294167 | orchestrator | 2025-03-23 13:05:14.298847 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-03-23 13:05:14.299905 | orchestrator | Sunday 23 March 2025 13:05:14 +0000 (0:00:00.309) 0:00:08.188 ********** 2025-03-23 13:05:16.569629 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:05:16.571546 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:05:16.572025 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:05:16.572682 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:05:16.573646 | orchestrator | changed: [testbed-manager] 2025-03-23 13:05:16.574420 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:05:16.574801 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:05:16.575327 | orchestrator | 2025-03-23 13:05:16.575916 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-03-23 13:05:16.577018 | orchestrator | Sunday 23 March 2025 13:05:16 +0000 (0:00:02.271) 0:00:10.460 ********** 2025-03-23 13:05:16.663300 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:05:16.898170 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:05:16.898860 | orchestrator | 2025-03-23 13:05:16.898896 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-03-23 13:05:16.899531 | orchestrator | Sunday 23 March 2025 13:05:16 +0000 (0:00:00.333) 0:00:10.794 ********** 2025-03-23 13:05:18.052491 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:05:18.053234 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:05:18.055535 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:05:18.055970 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:05:18.057639 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:05:18.058577 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:05:18.059302 | orchestrator | 2025-03-23 13:05:18.060441 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-03-23 13:05:18.061138 | orchestrator | Sunday 23 March 2025 13:05:18 +0000 (0:00:01.154) 0:00:11.948 ********** 2025-03-23 13:05:18.130810 | orchestrator | 
skipping: [testbed-manager] 2025-03-23 13:05:18.677171 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:05:18.677302 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:05:18.678746 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:05:18.679668 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:05:18.681172 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:05:18.681434 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:05:18.681463 | orchestrator | 2025-03-23 13:05:18.681683 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-03-23 13:05:18.681819 | orchestrator | Sunday 23 March 2025 13:05:18 +0000 (0:00:00.624) 0:00:12.572 ********** 2025-03-23 13:05:18.805903 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:05:18.834796 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:05:18.861353 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:05:19.158255 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:05:19.158666 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:05:19.159008 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:05:19.161666 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:19.165097 | orchestrator | 2025-03-23 13:05:19.165162 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-03-23 13:05:19.166420 | orchestrator | Sunday 23 March 2025 13:05:19 +0000 (0:00:00.479) 0:00:13.052 ********** 2025-03-23 13:05:19.244715 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:05:19.270389 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:05:19.303882 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:05:19.343203 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:05:19.425246 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:05:19.425488 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:05:19.425551 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:05:19.425984 | orchestrator | 2025-03-23 13:05:19.426231 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-03-23 13:05:19.426654 | orchestrator | Sunday 23 March 2025 13:05:19 +0000 (0:00:00.269) 0:00:13.322 ********** 2025-03-23 13:05:19.779356 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:05:19.783449 | orchestrator | 2025-03-23 13:05:20.121165 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-03-23 13:05:20.121245 | orchestrator | Sunday 23 March 2025 13:05:19 +0000 (0:00:00.352) 0:00:13.674 ********** 2025-03-23 13:05:20.121274 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:05:20.123375 | orchestrator | 2025-03-23 13:05:21.413419 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-03-23 13:05:21.413547 | orchestrator | Sunday 23 March 2025 13:05:20 +0000 (0:00:00.342) 0:00:14.016 ********** 2025-03-23 13:05:21.413642 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:21.414220 | orchestrator | 
ok: [testbed-node-3] 2025-03-23 13:05:21.415280 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:21.416729 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:05:21.417322 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:05:21.418547 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:05:21.419658 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:05:21.420608 | orchestrator | 2025-03-23 13:05:21.421211 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-03-23 13:05:21.422118 | orchestrator | Sunday 23 March 2025 13:05:21 +0000 (0:00:01.290) 0:00:15.307 ********** 2025-03-23 13:05:21.493613 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:05:21.524038 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:05:21.559536 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:05:21.587923 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:05:21.676093 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:05:21.676217 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:05:21.677122 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:05:21.678411 | orchestrator | 2025-03-23 13:05:21.679054 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-03-23 13:05:21.679763 | orchestrator | Sunday 23 March 2025 13:05:21 +0000 (0:00:00.265) 0:00:15.572 ********** 2025-03-23 13:05:22.242234 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:22.248439 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:05:22.362471 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:22.362510 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:05:22.362525 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:05:22.362540 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:05:22.362554 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:05:22.362568 | orchestrator | 2025-03-23 13:05:22.362619 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-03-23 13:05:22.362637 | orchestrator | Sunday 23 March 2025 13:05:22 +0000 (0:00:00.564) 0:00:16.137 ********** 2025-03-23 13:05:22.362658 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:05:22.392572 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:05:22.421428 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:05:22.504414 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:05:22.505213 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:05:22.505984 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:05:22.506458 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:05:22.507395 | orchestrator | 2025-03-23 13:05:22.508214 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-03-23 13:05:22.508994 | orchestrator | Sunday 23 March 2025 13:05:22 +0000 (0:00:00.264) 0:00:16.401 ********** 2025-03-23 13:05:23.062563 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:23.063367 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:05:23.063733 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:05:23.065298 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:05:23.065692 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:05:23.067059 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:05:23.067249 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:05:23.067853 | orchestrator | 2025-03-23 13:05:23.068484 | orchestrator | TASK 
[osism.commons.resolvconf : Copy configuration files] ********************* 2025-03-23 13:05:23.068927 | orchestrator | Sunday 23 March 2025 13:05:23 +0000 (0:00:00.555) 0:00:16.957 ********** 2025-03-23 13:05:24.306402 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:24.308529 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:05:24.308565 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:05:24.308620 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:05:24.312661 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:05:24.313115 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:05:24.313144 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:05:24.313903 | orchestrator | 2025-03-23 13:05:24.315103 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-03-23 13:05:24.315480 | orchestrator | Sunday 23 March 2025 13:05:24 +0000 (0:00:01.242) 0:00:18.199 ********** 2025-03-23 13:05:25.515809 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:25.516401 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:25.516615 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:05:25.516752 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:05:25.516818 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:05:25.517297 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:05:25.517772 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:05:25.518708 | orchestrator | 2025-03-23 13:05:25.519061 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-03-23 13:05:25.520150 | orchestrator | Sunday 23 March 2025 13:05:25 +0000 (0:00:01.206) 0:00:19.406 ********** 2025-03-23 13:05:25.887793 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:05:25.974249 | orchestrator | 2025-03-23 13:05:25.974313 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-03-23 13:05:25.974329 | orchestrator | Sunday 23 March 2025 13:05:25 +0000 (0:00:00.372) 0:00:19.778 ********** 2025-03-23 13:05:25.974354 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:05:27.503770 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:05:27.503919 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:05:27.504817 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:05:27.506326 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:05:27.509393 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:05:27.510514 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:05:27.510548 | orchestrator | 2025-03-23 13:05:27.511876 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-03-23 13:05:27.512821 | orchestrator | Sunday 23 March 2025 13:05:27 +0000 (0:00:01.619) 0:00:21.398 ********** 2025-03-23 13:05:27.584734 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:27.611004 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:05:27.642361 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:27.670862 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:05:27.739207 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:05:27.739307 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:05:27.740007 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:05:27.742060 | 
orchestrator | 2025-03-23 13:05:27.742302 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-03-23 13:05:27.742642 | orchestrator | Sunday 23 March 2025 13:05:27 +0000 (0:00:00.237) 0:00:21.635 ********** 2025-03-23 13:05:27.814839 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:27.846844 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:05:27.871257 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:27.909390 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:05:27.991320 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:05:27.996452 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:05:27.997092 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:05:27.997800 | orchestrator | 2025-03-23 13:05:28.001397 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-03-23 13:05:28.003251 | orchestrator | Sunday 23 March 2025 13:05:27 +0000 (0:00:00.250) 0:00:21.886 ********** 2025-03-23 13:05:28.086138 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:28.113906 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:05:28.144529 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:28.174693 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:05:28.250192 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:05:28.250636 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:05:28.251387 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:05:28.252493 | orchestrator | 2025-03-23 13:05:28.253449 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-03-23 13:05:28.254122 | orchestrator | Sunday 23 March 2025 13:05:28 +0000 (0:00:00.260) 0:00:22.146 ********** 2025-03-23 13:05:28.624113 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:05:28.624629 | orchestrator | 2025-03-23 13:05:28.624672 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-03-23 13:05:28.625822 | orchestrator | Sunday 23 March 2025 13:05:28 +0000 (0:00:00.372) 0:00:22.519 ********** 2025-03-23 13:05:29.180577 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:29.183026 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:05:29.183775 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:29.183817 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:05:29.185776 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:05:29.186759 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:05:29.187442 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:05:29.188205 | orchestrator | 2025-03-23 13:05:29.188701 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-03-23 13:05:29.189736 | orchestrator | Sunday 23 March 2025 13:05:29 +0000 (0:00:00.555) 0:00:23.075 ********** 2025-03-23 13:05:29.257773 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:05:29.294418 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:05:29.327041 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:05:29.366251 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:05:29.459430 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:05:29.460833 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:05:29.461792 | orchestrator | skipping: [testbed-node-2] 2025-03-23 
13:05:29.462731 | orchestrator | 2025-03-23 13:05:29.463646 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-03-23 13:05:29.464731 | orchestrator | Sunday 23 March 2025 13:05:29 +0000 (0:00:00.279) 0:00:23.355 ********** 2025-03-23 13:05:30.673845 | orchestrator | changed: [testbed-manager] 2025-03-23 13:05:30.674611 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:30.675794 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:05:30.676545 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:05:30.677388 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:05:30.678223 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:05:30.678985 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:05:30.679873 | orchestrator | 2025-03-23 13:05:30.680652 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-03-23 13:05:30.681368 | orchestrator | Sunday 23 March 2025 13:05:30 +0000 (0:00:01.213) 0:00:24.568 ********** 2025-03-23 13:05:31.297906 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:05:31.299128 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:31.301042 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:05:31.302137 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:31.303317 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:05:31.304550 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:05:31.305695 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:05:31.306826 | orchestrator | 2025-03-23 13:05:31.307809 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-03-23 13:05:31.308792 | orchestrator | Sunday 23 March 2025 13:05:31 +0000 (0:00:00.624) 0:00:25.192 ********** 2025-03-23 13:05:32.488194 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:32.490535 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:05:32.492177 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:32.492214 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:05:32.492876 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:05:32.493382 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:05:32.493738 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:05:32.494453 | orchestrator | 2025-03-23 13:05:32.496232 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-03-23 13:05:46.357186 | orchestrator | Sunday 23 March 2025 13:05:32 +0000 (0:00:01.189) 0:00:26.382 ********** 2025-03-23 13:05:46.357330 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:46.359674 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:05:46.359798 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:05:46.360190 | orchestrator | changed: [testbed-manager] 2025-03-23 13:05:46.360222 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:05:46.360654 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:05:46.361263 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:05:46.363574 | orchestrator | 2025-03-23 13:05:46.443929 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-03-23 13:05:46.443990 | orchestrator | Sunday 23 March 2025 13:05:46 +0000 (0:00:13.868) 0:00:40.250 ********** 2025-03-23 13:05:46.444015 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:46.473527 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:05:46.505308 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:46.532101 | orchestrator | ok: 
[testbed-node-5] 2025-03-23 13:05:46.597371 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:05:46.598496 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:05:46.599833 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:05:46.600041 | orchestrator | 2025-03-23 13:05:46.600566 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-03-23 13:05:46.601224 | orchestrator | Sunday 23 March 2025 13:05:46 +0000 (0:00:00.242) 0:00:40.493 ********** 2025-03-23 13:05:46.688599 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:46.719338 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:05:46.757493 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:46.786368 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:05:46.863994 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:05:46.864814 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:05:46.865529 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:05:46.866303 | orchestrator | 2025-03-23 13:05:46.866968 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-03-23 13:05:46.867474 | orchestrator | Sunday 23 March 2025 13:05:46 +0000 (0:00:00.264) 0:00:40.758 ********** 2025-03-23 13:05:46.979089 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:47.006662 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:05:47.054710 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:47.085269 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:05:47.169158 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:05:47.169738 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:05:47.170903 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:05:47.171694 | orchestrator | 2025-03-23 13:05:47.172126 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-03-23 13:05:47.172211 | orchestrator | Sunday 23 March 2025 13:05:47 +0000 (0:00:00.307) 0:00:41.066 ********** 2025-03-23 13:05:47.518225 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:05:47.519757 | orchestrator | 2025-03-23 13:05:47.521093 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-03-23 13:05:47.521706 | orchestrator | Sunday 23 March 2025 13:05:47 +0000 (0:00:00.346) 0:00:41.413 ********** 2025-03-23 13:05:49.305368 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:49.306076 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:05:49.306118 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:05:49.306490 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:05:49.306916 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:49.307464 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:05:49.308548 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:05:49.309166 | orchestrator | 2025-03-23 13:05:49.309561 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-03-23 13:05:49.310092 | orchestrator | Sunday 23 March 2025 13:05:49 +0000 (0:00:01.786) 0:00:43.199 ********** 2025-03-23 13:05:50.554299 | orchestrator | changed: [testbed-manager] 2025-03-23 13:05:50.554450 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:05:50.556148 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:05:50.556430 | orchestrator | 
changed: [testbed-node-5] 2025-03-23 13:05:50.557290 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:05:50.558887 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:05:50.559274 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:05:50.560184 | orchestrator | 2025-03-23 13:05:50.560617 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-03-23 13:05:50.562567 | orchestrator | Sunday 23 March 2025 13:05:50 +0000 (0:00:01.249) 0:00:44.448 ********** 2025-03-23 13:05:51.527184 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:05:51.527408 | orchestrator | ok: [testbed-manager] 2025-03-23 13:05:51.528112 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:05:51.530648 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:05:51.531536 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:05:51.532059 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:05:51.534395 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:05:51.534462 | orchestrator | 2025-03-23 13:05:51.534882 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-03-23 13:05:51.535746 | orchestrator | Sunday 23 March 2025 13:05:51 +0000 (0:00:00.973) 0:00:45.422 ********** 2025-03-23 13:05:51.861091 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:05:51.861252 | orchestrator | 2025-03-23 13:05:51.864393 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-03-23 13:05:52.973201 | orchestrator | Sunday 23 March 2025 13:05:51 +0000 (0:00:00.333) 0:00:45.755 ********** 2025-03-23 13:05:52.973329 | orchestrator | changed: [testbed-manager] 2025-03-23 13:05:52.973406 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:05:52.977568 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:05:52.980465 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:05:52.980493 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:05:52.980508 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:05:52.980523 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:05:52.980538 | orchestrator | 2025-03-23 13:05:52.980553 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-03-23 13:05:52.980620 | orchestrator | Sunday 23 March 2025 13:05:52 +0000 (0:00:01.110) 0:00:46.866 ********** 2025-03-23 13:05:53.056488 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:05:53.139748 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:05:53.178413 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:05:53.363740 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:05:53.364470 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:05:53.364973 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:05:53.366429 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:05:53.367697 | orchestrator | 2025-03-23 13:05:53.367766 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-03-23 13:05:53.368716 | orchestrator | Sunday 23 March 2025 13:05:53 +0000 (0:00:00.393) 0:00:47.259 ********** 2025-03-23 13:06:06.941168 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:06:06.941333 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:06:06.941362 | 
orchestrator | changed: [testbed-node-3] 2025-03-23 13:06:06.943236 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:06:06.943855 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:06:06.944528 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:06:06.945249 | orchestrator | changed: [testbed-manager] 2025-03-23 13:06:06.945730 | orchestrator | 2025-03-23 13:06:06.946679 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-03-23 13:06:06.946954 | orchestrator | Sunday 23 March 2025 13:06:06 +0000 (0:00:13.573) 0:01:00.832 ********** 2025-03-23 13:06:07.694070 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:06:07.694665 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:06:07.694702 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:06:07.694720 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:06:07.694743 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:06:07.695508 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:06:07.695533 | orchestrator | ok: [testbed-manager] 2025-03-23 13:06:07.695552 | orchestrator | 2025-03-23 13:06:07.695686 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-03-23 13:06:07.695793 | orchestrator | Sunday 23 March 2025 13:06:07 +0000 (0:00:00.754) 0:01:01.587 ********** 2025-03-23 13:06:08.661165 | orchestrator | ok: [testbed-manager] 2025-03-23 13:06:08.661842 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:06:08.666772 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:06:08.671064 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:06:08.672279 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:06:08.681717 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:06:08.682378 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:06:08.682410 | orchestrator | 2025-03-23 13:06:08.684180 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-03-23 13:06:08.685173 | orchestrator | Sunday 23 March 2025 13:06:08 +0000 (0:00:00.969) 0:01:02.557 ********** 2025-03-23 13:06:08.742339 | orchestrator | ok: [testbed-manager] 2025-03-23 13:06:08.781228 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:06:08.813134 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:06:08.853306 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:06:08.921434 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:06:08.922914 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:06:08.923774 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:06:08.925048 | orchestrator | 2025-03-23 13:06:08.926412 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-03-23 13:06:08.927711 | orchestrator | Sunday 23 March 2025 13:06:08 +0000 (0:00:00.260) 0:01:02.817 ********** 2025-03-23 13:06:09.007262 | orchestrator | ok: [testbed-manager] 2025-03-23 13:06:09.036476 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:06:09.069106 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:06:09.098557 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:06:09.180017 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:06:09.182853 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:06:09.183972 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:06:09.185878 | orchestrator | 2025-03-23 13:06:09.186762 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-03-23 13:06:09.187808 | orchestrator | Sunday 23 March 2025 13:06:09 +0000 
(0:00:00.257) 0:01:03.075 ********** 2025-03-23 13:06:09.509393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:06:09.511377 | orchestrator | 2025-03-23 13:06:09.513871 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-03-23 13:06:09.514806 | orchestrator | Sunday 23 March 2025 13:06:09 +0000 (0:00:00.330) 0:01:03.405 ********** 2025-03-23 13:06:11.090332 | orchestrator | ok: [testbed-manager] 2025-03-23 13:06:11.090484 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:06:11.090505 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:06:11.090526 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:06:11.091715 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:06:11.094343 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:06:11.095099 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:06:11.096172 | orchestrator | 2025-03-23 13:06:11.097211 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-03-23 13:06:11.097497 | orchestrator | Sunday 23 March 2025 13:06:11 +0000 (0:00:01.576) 0:01:04.982 ********** 2025-03-23 13:06:11.719805 | orchestrator | changed: [testbed-manager] 2025-03-23 13:06:11.720788 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:06:11.721353 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:06:11.722292 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:06:11.723075 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:06:11.723683 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:06:11.724510 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:06:11.725056 | orchestrator | 2025-03-23 13:06:11.725528 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-03-23 13:06:11.726087 | orchestrator | Sunday 23 March 2025 13:06:11 +0000 (0:00:00.632) 0:01:05.615 ********** 2025-03-23 13:06:11.804312 | orchestrator | ok: [testbed-manager] 2025-03-23 13:06:11.837397 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:06:11.862677 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:06:11.897809 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:06:11.968820 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:06:11.968973 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:06:11.968997 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:06:11.970195 | orchestrator | 2025-03-23 13:06:11.970411 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-03-23 13:06:11.971670 | orchestrator | Sunday 23 March 2025 13:06:11 +0000 (0:00:00.250) 0:01:05.865 ********** 2025-03-23 13:06:13.164272 | orchestrator | ok: [testbed-manager] 2025-03-23 13:06:13.167467 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:06:13.168765 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:06:13.168794 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:06:13.170303 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:06:13.171142 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:06:13.172261 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:06:13.172979 | orchestrator | 2025-03-23 13:06:13.173974 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-03-23 13:06:13.175125 | orchestrator | Sunday 23 
March 2025 13:06:13 +0000 (0:00:01.193) 0:01:07.058 ********** 2025-03-23 13:06:14.814223 | orchestrator | changed: [testbed-manager] 2025-03-23 13:06:14.816392 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:06:14.816975 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:06:14.817005 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:06:14.817847 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:06:14.818793 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:06:14.819299 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:06:14.820126 | orchestrator | 2025-03-23 13:06:14.820734 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-03-23 13:06:14.821428 | orchestrator | Sunday 23 March 2025 13:06:14 +0000 (0:00:01.651) 0:01:08.709 ********** 2025-03-23 13:06:17.237823 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:06:17.238412 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:06:17.240718 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:06:17.242285 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:06:17.243426 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:06:17.245612 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:06:17.246405 | orchestrator | ok: [testbed-manager] 2025-03-23 13:06:17.247541 | orchestrator | 2025-03-23 13:06:17.247760 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-03-23 13:06:17.248621 | orchestrator | Sunday 23 March 2025 13:06:17 +0000 (0:00:02.422) 0:01:11.132 ********** 2025-03-23 13:06:56.072535 | orchestrator | ok: [testbed-manager] 2025-03-23 13:06:56.073685 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:06:56.073717 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:06:56.073729 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:06:56.073739 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:06:56.073750 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:06:56.073760 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:06:56.073776 | orchestrator | 2025-03-23 13:06:56.074288 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-03-23 13:06:56.075387 | orchestrator | Sunday 23 March 2025 13:06:56 +0000 (0:00:38.825) 0:01:49.958 ********** 2025-03-23 13:08:15.573018 | orchestrator | changed: [testbed-manager] 2025-03-23 13:08:15.573472 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:08:15.573504 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:08:15.573524 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:08:15.574590 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:08:15.575020 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:08:15.575304 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:08:15.576917 | orchestrator | 2025-03-23 13:08:15.577541 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-03-23 13:08:15.579914 | orchestrator | Sunday 23 March 2025 13:08:15 +0000 (0:01:19.505) 0:03:09.463 ********** 2025-03-23 13:08:17.315330 | orchestrator | ok: [testbed-manager] 2025-03-23 13:08:17.315493 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:08:17.317247 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:08:17.319029 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:08:17.319706 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:08:17.320729 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:08:17.321529 | orchestrator | ok: 
[testbed-node-0] 2025-03-23 13:08:17.322055 | orchestrator | 2025-03-23 13:08:17.322602 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-03-23 13:08:17.323584 | orchestrator | Sunday 23 March 2025 13:08:17 +0000 (0:00:01.744) 0:03:11.208 ********** 2025-03-23 13:08:31.503297 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:08:31.504459 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:08:31.504494 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:08:31.504506 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:08:31.504525 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:08:31.505191 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:08:31.505810 | orchestrator | changed: [testbed-manager] 2025-03-23 13:08:31.507369 | orchestrator | 2025-03-23 13:08:31.507834 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-03-23 13:08:31.507863 | orchestrator | Sunday 23 March 2025 13:08:31 +0000 (0:00:14.183) 0:03:25.391 ********** 2025-03-23 13:08:31.896587 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-03-23 13:08:31.896932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-03-23 13:08:31.897004 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-03-23 13:08:31.897859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-03-23 13:08:31.902954 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-03-23 13:08:31.904189 | orchestrator | 2025-03-23 13:08:31.904688 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-03-23 13:08:31.904720 | orchestrator | Sunday 23 March 2025 13:08:31 +0000 
(0:00:00.400) 0:03:25.792 ********** 2025-03-23 13:08:31.963472 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-03-23 13:08:31.963803 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-03-23 13:08:31.994911 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:08:32.037088 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-03-23 13:08:32.038492 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:08:32.080674 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:08:32.081176 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-03-23 13:08:32.107049 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:08:32.768054 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-03-23 13:08:32.769581 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-03-23 13:08:32.771061 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-03-23 13:08:32.771892 | orchestrator | 2025-03-23 13:08:32.772995 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-03-23 13:08:32.774004 | orchestrator | Sunday 23 March 2025 13:08:32 +0000 (0:00:00.869) 0:03:26.661 ********** 2025-03-23 13:08:32.842869 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-03-23 13:08:32.845024 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-03-23 13:08:32.845922 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-03-23 13:08:32.846647 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-03-23 13:08:32.850991 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-03-23 13:08:32.932608 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-03-23 13:08:32.932705 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-03-23 13:08:32.932723 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-03-23 13:08:32.932740 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-03-23 13:08:32.932765 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-03-23 13:08:32.936215 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-03-23 13:08:32.940408 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-03-23 13:08:32.952996 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-03-23 13:08:32.953042 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-03-23 13:08:32.954697 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-03-23 13:08:32.957957 | orchestrator | skipping: 
[testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-03-23 13:08:32.959252 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-03-23 13:08:32.959284 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-03-23 13:08:32.961206 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-03-23 13:08:32.961663 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-03-23 13:08:32.962743 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-03-23 13:08:32.963817 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-03-23 13:08:32.964018 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-03-23 13:08:32.983729 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:08:32.984222 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-03-23 13:08:32.984994 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-03-23 13:08:33.051010 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-03-23 13:08:33.051446 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:08:33.052008 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-03-23 13:08:33.052439 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-03-23 13:08:33.053088 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-03-23 13:08:33.053351 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-03-23 13:08:33.053869 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-03-23 13:08:33.054367 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-03-23 13:08:33.055067 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-03-23 13:08:33.055354 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-03-23 13:08:33.055682 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-03-23 13:08:33.056480 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-03-23 13:08:33.056602 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-03-23 13:08:33.057474 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-03-23 13:08:33.058645 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-03-23 13:08:33.058679 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-03-23 13:08:33.082781 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:08:39.901760 | orchestrator | skipping: [testbed-node-5] 2025-03-23 
13:08:39.902806 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-03-23 13:08:39.903401 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-03-23 13:08:39.904471 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-03-23 13:08:39.907052 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-03-23 13:08:39.907587 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-03-23 13:08:39.909926 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-03-23 13:08:39.911297 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-03-23 13:08:39.912024 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-03-23 13:08:39.913343 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-03-23 13:08:39.914698 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-03-23 13:08:39.915301 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-03-23 13:08:39.916120 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-03-23 13:08:39.916820 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-03-23 13:08:39.917464 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-03-23 13:08:39.918440 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-03-23 13:08:39.918676 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-03-23 13:08:39.918965 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-03-23 13:08:39.919853 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-03-23 13:08:39.920105 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-03-23 13:08:39.920493 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-03-23 13:08:39.921130 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-03-23 13:08:39.921407 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-03-23 13:08:39.921791 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-03-23 13:08:39.922390 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-03-23 13:08:39.922771 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-03-23 13:08:39.922935 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-03-23 13:08:39.923893 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-03-23 13:08:39.924208 | orchestrator | 
changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-03-23 13:08:39.924573 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-03-23 13:08:39.925009 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-03-23 13:08:39.925540 | orchestrator | 2025-03-23 13:08:39.926088 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-03-23 13:08:39.926718 | orchestrator | Sunday 23 March 2025 13:08:39 +0000 (0:00:07.133) 0:03:33.795 ********** 2025-03-23 13:08:41.552747 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-23 13:08:41.553195 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-23 13:08:41.553233 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-23 13:08:41.553535 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-23 13:08:41.553958 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-23 13:08:41.556278 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-23 13:08:41.558054 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-23 13:08:41.558864 | orchestrator | 2025-03-23 13:08:41.560640 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-03-23 13:08:41.562095 | orchestrator | Sunday 23 March 2025 13:08:41 +0000 (0:00:01.651) 0:03:35.447 ********** 2025-03-23 13:08:41.612477 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-03-23 13:08:41.639032 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:08:41.721118 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-03-23 13:08:42.060143 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:08:42.060319 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-03-23 13:08:42.060955 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:08:42.062108 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-03-23 13:08:42.062451 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:08:42.062476 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-03-23 13:08:42.063067 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-03-23 13:08:42.063278 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-03-23 13:08:42.064406 | orchestrator | 2025-03-23 13:08:42.064626 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-03-23 13:08:42.065360 | orchestrator | Sunday 23 March 2025 13:08:42 +0000 (0:00:00.507) 0:03:35.954 ********** 2025-03-23 13:08:42.117357 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-03-23 13:08:42.148210 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:08:42.243842 | 
orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-03-23 13:08:43.678406 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:08:43.679526 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-03-23 13:08:43.680343 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:08:43.680489 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-03-23 13:08:43.681409 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:08:43.682964 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-03-23 13:08:43.684064 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-03-23 13:08:43.685072 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-03-23 13:08:43.686889 | orchestrator | 2025-03-23 13:08:43.687687 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-03-23 13:08:43.688721 | orchestrator | Sunday 23 March 2025 13:08:43 +0000 (0:00:01.617) 0:03:37.572 ********** 2025-03-23 13:08:43.730466 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:08:43.798304 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:08:43.829391 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:08:43.855740 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:08:43.983772 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:08:43.983965 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:08:43.983992 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:08:43.984225 | orchestrator | 2025-03-23 13:08:43.986429 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-03-23 13:08:43.987309 | orchestrator | Sunday 23 March 2025 13:08:43 +0000 (0:00:00.306) 0:03:37.878 ********** 2025-03-23 13:08:49.766804 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:08:49.767394 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:08:49.767873 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:08:49.768826 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:08:49.770291 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:08:49.771454 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:08:49.771484 | orchestrator | ok: [testbed-manager] 2025-03-23 13:08:49.771580 | orchestrator | 2025-03-23 13:08:49.771976 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-03-23 13:08:49.772732 | orchestrator | Sunday 23 March 2025 13:08:49 +0000 (0:00:05.782) 0:03:43.661 ********** 2025-03-23 13:08:49.861688 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-03-23 13:08:49.921323 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-03-23 13:08:49.921412 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:08:49.922148 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-03-23 13:08:49.957841 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:08:49.996010 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:08:50.060625 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-03-23 13:08:50.060673 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-03-23 13:08:50.061957 | orchestrator | skipping: [testbed-node-5] 
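The sysctl tasks earlier in this play apply parameter groups (elasticsearch, rabbitmq, generic, compute, k3s_node) only on hosts that belong to the matching inventory group, which is why the control-plane nodes report changed for the elasticsearch and rabbitmq items while the manager and compute nodes skip them, and the reverse holds for the compute and k3s_node groups. A minimal sketch of how such a grouped parameter list can be applied with ansible.posix.sysctl is shown below; the variable names mirror the items printed by the include task above, but the task is an illustration under those assumptions, not the actual osism.commons.sysctl implementation.

# Illustration only: apply one sysctl parameter group when the host is in the
# matching Ansible inventory group. Variable names follow the log output above;
# the real osism.commons.sysctl role may be structured differently.
- name: "Set sysctl parameters on {{ sysctl_group.key }}"
  ansible.posix.sysctl:
    name: "{{ item.name }}"                    # e.g. net.ipv4.tcp_keepalive_time
    value: "{{ item.value }}"                  # e.g. 6
    sysctl_file: /etc/sysctl.d/99-osism.conf   # assumed drop-in file name
    sysctl_set: true                           # also apply the value at runtime
    state: present
  loop: "{{ sysctl_group.value }}"             # e.g. the rabbitmq parameter list
  # Group selection simplified to match the skip/changed pattern in the log:
  # 'generic' applies everywhere, the other groups only on matching hosts.
  when: sysctl_group.key == 'generic' or sysctl_group.key in group_names

Each parameter is written to a drop-in file and applied immediately, which is consistent with the changed results reported for the targeted nodes above.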
2025-03-23 13:08:50.132430 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-03-23 13:08:50.132535 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:08:50.135451 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:08:50.136834 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-03-23 13:08:50.136862 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:08:50.137572 | orchestrator | 2025-03-23 13:08:50.138776 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-03-23 13:08:50.139781 | orchestrator | Sunday 23 March 2025 13:08:50 +0000 (0:00:00.365) 0:03:44.027 ********** 2025-03-23 13:08:51.318275 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-03-23 13:08:51.321411 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-03-23 13:08:51.322772 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-03-23 13:08:51.325798 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-03-23 13:08:51.327147 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-03-23 13:08:51.328538 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-03-23 13:08:51.329625 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-03-23 13:08:51.330528 | orchestrator | 2025-03-23 13:08:51.331462 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-03-23 13:08:51.332714 | orchestrator | Sunday 23 March 2025 13:08:51 +0000 (0:00:01.183) 0:03:45.211 ********** 2025-03-23 13:08:51.811954 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:08:51.812212 | orchestrator | 2025-03-23 13:08:51.812753 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-03-23 13:08:51.813201 | orchestrator | Sunday 23 March 2025 13:08:51 +0000 (0:00:00.497) 0:03:45.708 ********** 2025-03-23 13:08:53.343321 | orchestrator | ok: [testbed-manager] 2025-03-23 13:08:53.343733 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:08:53.344371 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:08:53.344490 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:08:53.345007 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:08:53.345390 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:08:53.345832 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:08:53.346353 | orchestrator | 2025-03-23 13:08:53.346776 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-03-23 13:08:53.347179 | orchestrator | Sunday 23 March 2025 13:08:53 +0000 (0:00:01.528) 0:03:47.236 ********** 2025-03-23 13:08:54.085518 | orchestrator | ok: [testbed-manager] 2025-03-23 13:08:54.085740 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:08:54.086349 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:08:54.087269 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:08:54.087651 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:08:54.088263 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:08:54.088821 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:08:54.089480 | orchestrator | 2025-03-23 13:08:54.090421 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-03-23 13:08:54.766409 | orchestrator | Sunday 23 March 2025 13:08:54 +0000 (0:00:00.742) 
0:03:47.979 ********** 2025-03-23 13:08:54.766518 | orchestrator | changed: [testbed-manager] 2025-03-23 13:08:54.766624 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:08:54.768640 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:08:54.769009 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:08:54.769438 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:08:54.769664 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:08:54.771429 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:08:54.772317 | orchestrator | 2025-03-23 13:08:54.772761 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-03-23 13:08:54.773681 | orchestrator | Sunday 23 March 2025 13:08:54 +0000 (0:00:00.681) 0:03:48.660 ********** 2025-03-23 13:08:55.412429 | orchestrator | ok: [testbed-manager] 2025-03-23 13:08:55.413364 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:08:55.414306 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:08:55.414678 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:08:55.415593 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:08:55.417019 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:08:55.418006 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:08:55.418821 | orchestrator | 2025-03-23 13:08:55.419252 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-03-23 13:08:55.420228 | orchestrator | Sunday 23 March 2025 13:08:55 +0000 (0:00:00.646) 0:03:49.307 ********** 2025-03-23 13:08:56.474946 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1742733598.9033217, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:08:56.475859 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1742733597.487996, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:08:56.476480 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1742733598.4455204, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:08:56.479363 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1742733613.1404622, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:08:56.482073 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1742733603.3492484, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:08:56.483221 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1742733596.222705, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:08:56.484339 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1742733600.3801925, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:08:56.485057 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1742733536.3607697, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:08:56.485638 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1742733627.9254014, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:08:56.486196 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1742733548.9229696, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:08:56.486839 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1742733537.670094, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:08:56.487632 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1742733528.9395165, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:08:56.487759 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1742733537.288947, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:08:56.488340 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1742733533.6090827, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:08:56.488672 | orchestrator | 2025-03-23 13:08:56.488942 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-03-23 13:08:56.489390 | orchestrator | Sunday 23 March 2025 13:08:56 +0000 (0:00:01.062) 0:03:50.369 ********** 2025-03-23 13:08:57.656378 | orchestrator | changed: [testbed-manager] 2025-03-23 13:08:57.657261 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:08:57.657762 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:08:57.657790 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:08:57.659079 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:08:57.660078 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:08:57.660716 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:08:57.661161 | orchestrator | 
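The motd tasks above replace Ubuntu's dynamic message of the day with a static one: the update-motd package is removed, the motd-news service is disabled, the pam_motd.so rules are stripped from the files found in /etc/pam.d, and static motd and issue files are copied in (the issue file follows in the next task). A minimal sketch of the PAM and motd-news steps, assuming the find result is registered as pam_d_files and the stat result as motd_news_file, could look like the tasks below; this is an illustration, not the role's actual source.

# Illustration only: disable dynamic motd handling. Register names
# (pam_d_files, motd_news_file) are assumptions for this sketch.
- name: Disable the dynamic motd-news service
  ansible.builtin.lineinfile:
    path: /etc/default/motd-news
    regexp: '^ENABLED='
    line: 'ENABLED=0'
  when: motd_news_file.stat.exists

- name: Remove pam_motd.so rule
  ansible.builtin.lineinfile:
    path: "{{ item.path }}"                    # e.g. /etc/pam.d/sshd, /etc/pam.d/login
    regexp: '^session\s+optional\s+pam_motd\.so'
    state: absent
  loop: "{{ pam_d_files.files }}"

- name: Copy motd file
  ansible.builtin.copy:
    src: motd                                  # assumed static source file name
    dest: /etc/motd
    owner: root
    group: root
    mode: "0644"

Together with the "Configure SSH to not print the motd" task that follows, this is intended to leave only the static /etc/motd and /etc/issue content on login.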
2025-03-23 13:08:57.662326 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-03-23 13:08:57.663296 | orchestrator | Sunday 23 March 2025 13:08:57 +0000 (0:00:01.180) 0:03:51.549 ********** 2025-03-23 13:08:58.812041 | orchestrator | changed: [testbed-manager] 2025-03-23 13:08:58.812231 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:08:58.813632 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:08:58.814973 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:08:58.816263 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:08:58.817472 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:08:58.818232 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:08:58.819370 | orchestrator | 2025-03-23 13:08:58.820565 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-03-23 13:08:58.821417 | orchestrator | Sunday 23 March 2025 13:08:58 +0000 (0:00:01.156) 0:03:52.706 ********** 2025-03-23 13:08:58.905580 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:08:58.954073 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:08:58.989352 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:08:59.027156 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:08:59.059398 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:08:59.117585 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:08:59.119681 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:08:59.121499 | orchestrator | 2025-03-23 13:08:59.122365 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-03-23 13:08:59.123592 | orchestrator | Sunday 23 March 2025 13:08:59 +0000 (0:00:00.306) 0:03:53.013 ********** 2025-03-23 13:08:59.911990 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:08:59.916073 | orchestrator | ok: [testbed-manager] 2025-03-23 13:08:59.918074 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:08:59.919498 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:08:59.919714 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:08:59.920517 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:08:59.921219 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:08:59.921748 | orchestrator | 2025-03-23 13:08:59.922273 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-03-23 13:08:59.923215 | orchestrator | Sunday 23 March 2025 13:08:59 +0000 (0:00:00.792) 0:03:53.806 ********** 2025-03-23 13:09:00.381041 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:09:00.381286 | orchestrator | 2025-03-23 13:09:00.382219 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-03-23 13:09:00.382912 | orchestrator | Sunday 23 March 2025 13:09:00 +0000 (0:00:00.470) 0:03:54.277 ********** 2025-03-23 13:09:08.735323 | orchestrator | ok: [testbed-manager] 2025-03-23 13:09:08.735511 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:09:08.735584 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:09:08.735609 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:09:08.735847 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:09:08.736274 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:09:08.736873 | 
orchestrator | changed: [testbed-node-0] 2025-03-23 13:09:08.737026 | orchestrator | 2025-03-23 13:09:08.737368 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-03-23 13:09:08.738064 | orchestrator | Sunday 23 March 2025 13:09:08 +0000 (0:00:08.352) 0:04:02.629 ********** 2025-03-23 13:09:09.983008 | orchestrator | ok: [testbed-manager] 2025-03-23 13:09:09.983320 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:09:09.983934 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:09:09.987667 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:09:09.987806 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:09:09.987829 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:09:09.987843 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:09:09.987857 | orchestrator | 2025-03-23 13:09:09.987873 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-03-23 13:09:09.987893 | orchestrator | Sunday 23 March 2025 13:09:09 +0000 (0:00:01.247) 0:04:03.876 ********** 2025-03-23 13:09:11.315451 | orchestrator | ok: [testbed-manager] 2025-03-23 13:09:11.316387 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:09:11.316460 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:09:11.316664 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:09:11.317513 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:09:11.321389 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:09:11.728916 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:09:11.728997 | orchestrator | 2025-03-23 13:09:11.729015 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-03-23 13:09:11.729031 | orchestrator | Sunday 23 March 2025 13:09:11 +0000 (0:00:01.333) 0:04:05.210 ********** 2025-03-23 13:09:11.729057 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:09:11.730091 | orchestrator | 2025-03-23 13:09:11.734426 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-03-23 13:09:11.737454 | orchestrator | Sunday 23 March 2025 13:09:11 +0000 (0:00:00.414) 0:04:05.625 ********** 2025-03-23 13:09:20.854068 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:09:20.855496 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:09:20.855530 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:09:20.856413 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:09:20.859094 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:09:20.859834 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:09:20.860381 | orchestrator | changed: [testbed-manager] 2025-03-23 13:09:20.860895 | orchestrator | 2025-03-23 13:09:20.861611 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-03-23 13:09:20.862310 | orchestrator | Sunday 23 March 2025 13:09:20 +0000 (0:00:09.122) 0:04:14.747 ********** 2025-03-23 13:09:21.621145 | orchestrator | changed: [testbed-manager] 2025-03-23 13:09:21.625173 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:09:21.626439 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:09:21.627405 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:09:21.628319 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:09:21.629478 | orchestrator | 
changed: [testbed-node-0] 2025-03-23 13:09:21.630193 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:09:21.631513 | orchestrator | 2025-03-23 13:09:21.632377 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-03-23 13:09:21.633205 | orchestrator | Sunday 23 March 2025 13:09:21 +0000 (0:00:00.767) 0:04:15.515 ********** 2025-03-23 13:09:22.931948 | orchestrator | changed: [testbed-manager] 2025-03-23 13:09:22.932351 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:09:22.933841 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:09:22.935663 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:09:22.935783 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:09:22.935810 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:09:22.936368 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:09:22.936722 | orchestrator | 2025-03-23 13:09:22.937197 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-03-23 13:09:22.937867 | orchestrator | Sunday 23 March 2025 13:09:22 +0000 (0:00:01.309) 0:04:16.824 ********** 2025-03-23 13:09:24.071400 | orchestrator | changed: [testbed-manager] 2025-03-23 13:09:24.074235 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:09:24.075834 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:09:24.075867 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:09:24.076890 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:09:24.078227 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:09:24.078811 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:09:24.079493 | orchestrator | 2025-03-23 13:09:24.080634 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-03-23 13:09:24.081566 | orchestrator | Sunday 23 March 2025 13:09:24 +0000 (0:00:01.140) 0:04:17.965 ********** 2025-03-23 13:09:24.190151 | orchestrator | ok: [testbed-manager] 2025-03-23 13:09:24.221511 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:09:24.259143 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:09:24.292256 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:09:24.369696 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:09:24.371061 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:09:24.374157 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:09:24.374510 | orchestrator | 2025-03-23 13:09:24.375401 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-03-23 13:09:24.376283 | orchestrator | Sunday 23 March 2025 13:09:24 +0000 (0:00:00.300) 0:04:18.265 ********** 2025-03-23 13:09:24.480854 | orchestrator | ok: [testbed-manager] 2025-03-23 13:09:24.536751 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:09:24.574154 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:09:24.623661 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:09:24.730510 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:09:24.731126 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:09:24.731459 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:09:24.731979 | orchestrator | 2025-03-23 13:09:24.732656 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-03-23 13:09:24.733098 | orchestrator | Sunday 23 March 2025 13:09:24 +0000 (0:00:00.360) 0:04:18.626 ********** 2025-03-23 13:09:24.852610 | orchestrator | ok: [testbed-manager] 2025-03-23 13:09:24.890480 | 
orchestrator | ok: [testbed-node-3] 2025-03-23 13:09:24.952937 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:09:24.989922 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:09:25.087297 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:09:25.093572 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:09:25.093656 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:09:25.097894 | orchestrator | 2025-03-23 13:09:25.098200 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-03-23 13:09:25.098639 | orchestrator | Sunday 23 March 2025 13:09:25 +0000 (0:00:00.351) 0:04:18.978 ********** 2025-03-23 13:09:30.605942 | orchestrator | ok: [testbed-manager] 2025-03-23 13:09:30.606351 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:09:30.606384 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:09:30.606405 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:09:30.606677 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:09:30.607397 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:09:30.608052 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:09:30.608724 | orchestrator | 2025-03-23 13:09:30.609214 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-03-23 13:09:30.609737 | orchestrator | Sunday 23 March 2025 13:09:30 +0000 (0:00:05.522) 0:04:24.500 ********** 2025-03-23 13:09:31.082364 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:09:31.083173 | orchestrator | 2025-03-23 13:09:31.086306 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-03-23 13:09:31.201588 | orchestrator | Sunday 23 March 2025 13:09:31 +0000 (0:00:00.475) 0:04:24.976 ********** 2025-03-23 13:09:31.201668 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-03-23 13:09:31.202727 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-03-23 13:09:31.202925 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-03-23 13:09:31.203211 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-03-23 13:09:31.245770 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:09:31.290883 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-03-23 13:09:31.290913 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-03-23 13:09:31.290933 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:09:31.336434 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-03-23 13:09:31.376625 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:09:31.376670 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-03-23 13:09:31.376685 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-03-23 13:09:31.376740 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-03-23 13:09:31.376763 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:09:31.465117 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-03-23 13:09:31.465299 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:09:31.465684 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-03-23 13:09:31.466102 | orchestrator | skipping: [testbed-node-1] 2025-03-23 
13:09:31.466654 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-03-23 13:09:31.467302 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-03-23 13:09:31.467519 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:09:31.467921 | orchestrator | 2025-03-23 13:09:31.468562 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-03-23 13:09:31.469014 | orchestrator | Sunday 23 March 2025 13:09:31 +0000 (0:00:00.384) 0:04:25.361 ********** 2025-03-23 13:09:31.985201 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:09:31.986355 | orchestrator | 2025-03-23 13:09:31.987255 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-03-23 13:09:31.988196 | orchestrator | Sunday 23 March 2025 13:09:31 +0000 (0:00:00.520) 0:04:25.881 ********** 2025-03-23 13:09:32.070145 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-03-23 13:09:32.119124 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-03-23 13:09:32.179222 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:09:32.179280 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:09:32.179640 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-03-23 13:09:32.180230 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-03-23 13:09:32.223681 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:09:32.224159 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-03-23 13:09:32.261389 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:09:32.350644 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-03-23 13:09:32.350721 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:09:32.351026 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:09:32.351902 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-03-23 13:09:32.352449 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:09:32.353286 | orchestrator | 2025-03-23 13:09:32.353977 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-03-23 13:09:32.354498 | orchestrator | Sunday 23 March 2025 13:09:32 +0000 (0:00:00.364) 0:04:26.245 ********** 2025-03-23 13:09:32.848583 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:09:32.851607 | orchestrator | 2025-03-23 13:09:32.855371 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-03-23 13:09:32.855756 | orchestrator | Sunday 23 March 2025 13:09:32 +0000 (0:00:00.497) 0:04:26.743 ********** 2025-03-23 13:10:07.743413 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:10:07.747071 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:10:07.747119 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:10:07.747439 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:10:07.747464 | orchestrator | changed: [testbed-node-4] 2025-03-23 
13:10:07.747478 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:10:07.747493 | orchestrator | changed: [testbed-manager] 2025-03-23 13:10:07.747512 | orchestrator | 2025-03-23 13:10:07.749468 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-03-23 13:10:07.751320 | orchestrator | Sunday 23 March 2025 13:10:07 +0000 (0:00:34.893) 0:05:01.636 ********** 2025-03-23 13:10:15.564098 | orchestrator | changed: [testbed-manager] 2025-03-23 13:10:15.564362 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:10:15.566244 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:10:15.567336 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:10:15.567986 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:10:15.568828 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:10:15.569412 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:10:15.569983 | orchestrator | 2025-03-23 13:10:15.570672 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-03-23 13:10:15.571284 | orchestrator | Sunday 23 March 2025 13:10:15 +0000 (0:00:07.824) 0:05:09.460 ********** 2025-03-23 13:10:23.839723 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:10:23.840793 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:10:23.841433 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:10:23.842696 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:10:23.844666 | orchestrator | changed: [testbed-manager] 2025-03-23 13:10:23.847075 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:10:23.847770 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:10:23.848262 | orchestrator | 2025-03-23 13:10:23.849287 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-03-23 13:10:23.849808 | orchestrator | Sunday 23 March 2025 13:10:23 +0000 (0:00:08.273) 0:05:17.733 ********** 2025-03-23 13:10:25.794299 | orchestrator | ok: [testbed-manager] 2025-03-23 13:10:25.794909 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:10:25.796135 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:10:25.799503 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:10:25.800006 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:10:25.800940 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:10:25.801852 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:10:25.802821 | orchestrator | 2025-03-23 13:10:25.803732 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-03-23 13:10:25.804416 | orchestrator | Sunday 23 March 2025 13:10:25 +0000 (0:00:01.954) 0:05:19.688 ********** 2025-03-23 13:10:31.838663 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:10:31.839588 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:10:31.840667 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:10:31.841484 | orchestrator | changed: [testbed-manager] 2025-03-23 13:10:31.844670 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:10:31.845849 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:10:31.846495 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:10:31.848268 | orchestrator | 2025-03-23 13:10:31.848951 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-03-23 13:10:31.849923 | orchestrator | Sunday 23 March 2025 13:10:31 +0000 (0:00:06.042) 0:05:25.730 ********** 2025-03-23 13:10:32.269224 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:10:32.269401 | orchestrator | 2025-03-23 13:10:32.270779 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-03-23 13:10:32.271219 | orchestrator | Sunday 23 March 2025 13:10:32 +0000 (0:00:00.434) 0:05:26.165 ********** 2025-03-23 13:10:33.074823 | orchestrator | changed: [testbed-manager] 2025-03-23 13:10:33.075134 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:10:33.075172 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:10:33.078675 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:10:33.078898 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:10:33.079187 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:10:33.079553 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:10:33.079914 | orchestrator | 2025-03-23 13:10:33.080240 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-03-23 13:10:33.080544 | orchestrator | Sunday 23 March 2025 13:10:33 +0000 (0:00:00.804) 0:05:26.969 ********** 2025-03-23 13:10:34.875732 | orchestrator | ok: [testbed-manager] 2025-03-23 13:10:34.877796 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:10:34.879027 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:10:34.879083 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:10:34.880265 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:10:34.881218 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:10:34.881768 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:10:34.882662 | orchestrator | 2025-03-23 13:10:34.883605 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-03-23 13:10:34.884428 | orchestrator | Sunday 23 March 2025 13:10:34 +0000 (0:00:01.799) 0:05:28.769 ********** 2025-03-23 13:10:35.753778 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:10:35.754539 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:10:35.754596 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:10:35.756021 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:10:35.756884 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:10:35.757868 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:10:35.758898 | orchestrator | changed: [testbed-manager] 2025-03-23 13:10:35.759751 | orchestrator | 2025-03-23 13:10:35.760149 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-03-23 13:10:35.761060 | orchestrator | Sunday 23 March 2025 13:10:35 +0000 (0:00:00.880) 0:05:29.649 ********** 2025-03-23 13:10:35.853486 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:10:35.889749 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:10:35.926547 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:10:35.964652 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:10:36.001531 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:10:36.060869 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:10:36.061256 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:10:36.063072 | orchestrator | 2025-03-23 13:10:36.063344 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-03-23 13:10:36.064678 | orchestrator | Sunday 23 March 2025 13:10:36 +0000 
(0:00:00.306) 0:05:29.956 ********** 2025-03-23 13:10:36.139871 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:10:36.174238 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:10:36.212059 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:10:36.305433 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:10:36.513835 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:10:36.514129 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:10:36.515298 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:10:36.515806 | orchestrator | 2025-03-23 13:10:36.520150 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-03-23 13:10:36.521123 | orchestrator | Sunday 23 March 2025 13:10:36 +0000 (0:00:00.453) 0:05:30.409 ********** 2025-03-23 13:10:36.626324 | orchestrator | ok: [testbed-manager] 2025-03-23 13:10:36.685095 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:10:36.719747 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:10:36.760590 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:10:36.851650 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:10:36.852592 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:10:36.852699 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:10:36.854560 | orchestrator | 2025-03-23 13:10:36.855190 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-03-23 13:10:36.855742 | orchestrator | Sunday 23 March 2025 13:10:36 +0000 (0:00:00.337) 0:05:30.747 ********** 2025-03-23 13:10:36.919835 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:10:36.954202 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:10:36.998461 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:10:37.032312 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:10:37.076014 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:10:37.166938 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:10:37.167397 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:10:37.168183 | orchestrator | 2025-03-23 13:10:37.169016 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-03-23 13:10:37.173279 | orchestrator | Sunday 23 March 2025 13:10:37 +0000 (0:00:00.315) 0:05:31.062 ********** 2025-03-23 13:10:37.300150 | orchestrator | ok: [testbed-manager] 2025-03-23 13:10:37.340697 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:10:37.395860 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:10:37.436400 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:10:37.540806 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:10:37.542126 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:10:37.542161 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:10:37.542598 | orchestrator | 2025-03-23 13:10:37.544796 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-03-23 13:10:37.545919 | orchestrator | Sunday 23 March 2025 13:10:37 +0000 (0:00:00.373) 0:05:31.435 ********** 2025-03-23 13:10:37.654564 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:10:37.689213 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:10:37.738528 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:10:37.781607 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:10:37.860489 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:10:37.861715 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:10:37.862213 | 
orchestrator | skipping: [testbed-node-2] 2025-03-23 13:10:37.863639 | orchestrator | 2025-03-23 13:10:37.864198 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-03-23 13:10:37.865226 | orchestrator | Sunday 23 March 2025 13:10:37 +0000 (0:00:00.320) 0:05:31.756 ********** 2025-03-23 13:10:37.937058 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:10:38.012443 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:10:38.046907 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:10:38.086487 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:10:38.152844 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:10:38.153346 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:10:38.153992 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:10:38.154652 | orchestrator | 2025-03-23 13:10:38.155812 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-03-23 13:10:38.157233 | orchestrator | Sunday 23 March 2025 13:10:38 +0000 (0:00:00.293) 0:05:32.050 ********** 2025-03-23 13:10:38.745723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:10:38.747245 | orchestrator | 2025-03-23 13:10:38.751097 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-03-23 13:10:38.752383 | orchestrator | Sunday 23 March 2025 13:10:38 +0000 (0:00:00.590) 0:05:32.640 ********** 2025-03-23 13:10:39.664658 | orchestrator | ok: [testbed-manager] 2025-03-23 13:10:39.664822 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:10:39.664848 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:10:39.668276 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:10:39.668376 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:10:39.669086 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:10:39.669600 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:10:39.670175 | orchestrator | 2025-03-23 13:10:39.670713 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-03-23 13:10:39.671234 | orchestrator | Sunday 23 March 2025 13:10:39 +0000 (0:00:00.917) 0:05:33.557 ********** 2025-03-23 13:10:42.637146 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:10:42.637945 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:10:42.638709 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:10:42.639606 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:10:42.640613 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:10:42.641252 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:10:42.641854 | orchestrator | ok: [testbed-manager] 2025-03-23 13:10:42.642848 | orchestrator | 2025-03-23 13:10:42.643404 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-03-23 13:10:42.644755 | orchestrator | Sunday 23 March 2025 13:10:42 +0000 (0:00:02.974) 0:05:36.532 ********** 2025-03-23 13:10:42.739960 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-03-23 13:10:42.740899 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-03-23 13:10:42.741945 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-03-23 13:10:42.817214 | orchestrator | skipping: [testbed-manager] 
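
The two preceding docker tasks ("Gather package facts" and "Check whether packages are installed that should not be installed") act as a guard: before docker-ce is configured, the role verifies that conflicting distribution packages such as docker.io or docker-engine are not already present, and the per-host "skipping" results around this point show that none were found. A minimal sketch of such a guard follows for orientation only; it is an assumed pattern, not the source of the osism.services.docker role.

# Sketch of a conflicting-package guard (assumed pattern, not the osism role code).
- name: Gather package facts
  ansible.builtin.package_facts:
    manager: auto

- name: Check whether packages are installed that should not be installed
  ansible.builtin.fail:
    msg: "{{ item }} is installed and conflicts with the docker-ce packages"
  loop:
    - containerd
    - docker.io
    - docker-engine
  # The condition below is an assumption; it makes the task skip (as seen in
  # this log) whenever the package from the current loop item is not installed.
  when: item in ansible_facts.packages
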
2025-03-23 13:10:42.817952 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-03-23 13:10:42.818715 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-03-23 13:10:42.819456 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-03-23 13:10:42.899727 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:10:42.900645 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-03-23 13:10:42.901422 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-03-23 13:10:42.902351 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-03-23 13:10:42.982427 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:10:42.983646 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-03-23 13:10:42.984567 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-03-23 13:10:43.057100 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-03-23 13:10:43.058003 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-03-23 13:10:43.059032 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-03-23 13:10:43.137509 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:10:43.138713 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-03-23 13:10:43.138747 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-03-23 13:10:43.142208 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-03-23 13:10:43.314847 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-03-23 13:10:43.314897 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:10:43.315165 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:10:43.315546 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-03-23 13:10:43.317922 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-03-23 13:10:43.321946 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-03-23 13:10:43.322071 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:10:43.322334 | orchestrator | 2025-03-23 13:10:43.322746 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-03-23 13:10:43.323121 | orchestrator | Sunday 23 March 2025 13:10:43 +0000 (0:00:00.674) 0:05:37.206 ********** 2025-03-23 13:10:49.714986 | orchestrator | ok: [testbed-manager] 2025-03-23 13:10:49.715166 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:10:49.716326 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:10:49.717887 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:10:49.718990 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:10:49.721541 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:10:49.723419 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:10:49.723455 | orchestrator | 2025-03-23 13:10:49.723675 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-03-23 13:10:49.723707 | orchestrator | Sunday 23 March 2025 13:10:49 +0000 (0:00:06.402) 0:05:43.608 ********** 2025-03-23 13:10:50.946668 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:10:50.946817 | orchestrator | ok: [testbed-manager] 2025-03-23 13:10:50.947009 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:10:50.947040 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:10:50.947247 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:10:50.948546 | 
orchestrator | changed: [testbed-node-1] 2025-03-23 13:10:50.948811 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:10:50.949478 | orchestrator | 2025-03-23 13:10:50.949863 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-03-23 13:10:50.950480 | orchestrator | Sunday 23 March 2025 13:10:50 +0000 (0:00:01.230) 0:05:44.838 ********** 2025-03-23 13:10:58.484402 | orchestrator | ok: [testbed-manager] 2025-03-23 13:10:58.488664 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:10:58.490533 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:10:58.491386 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:10:58.494206 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:10:58.494645 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:10:58.495600 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:10:58.498869 | orchestrator | 2025-03-23 13:10:58.499338 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-03-23 13:10:58.500302 | orchestrator | Sunday 23 March 2025 13:10:58 +0000 (0:00:07.540) 0:05:52.379 ********** 2025-03-23 13:11:01.587753 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:11:01.589069 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:11:01.589824 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:11:01.591689 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:11:01.592341 | orchestrator | changed: [testbed-manager] 2025-03-23 13:11:01.593483 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:11:01.595277 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:11:01.596203 | orchestrator | 2025-03-23 13:11:01.596908 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-03-23 13:11:01.598460 | orchestrator | Sunday 23 March 2025 13:11:01 +0000 (0:00:03.100) 0:05:55.479 ********** 2025-03-23 13:11:02.947929 | orchestrator | ok: [testbed-manager] 2025-03-23 13:11:02.948699 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:11:02.949064 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:11:02.950002 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:11:02.950769 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:11:02.952295 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:11:02.952395 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:11:02.952952 | orchestrator | 2025-03-23 13:11:02.953505 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-03-23 13:11:02.956278 | orchestrator | Sunday 23 March 2025 13:11:02 +0000 (0:00:01.353) 0:05:56.833 ********** 2025-03-23 13:11:04.545183 | orchestrator | ok: [testbed-manager] 2025-03-23 13:11:04.546292 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:11:04.549425 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:11:04.550657 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:11:04.550694 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:11:04.550709 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:11:04.550731 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:11:04.551717 | orchestrator | 2025-03-23 13:11:04.552521 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-03-23 13:11:04.553383 | orchestrator | Sunday 23 March 2025 13:11:04 +0000 (0:00:01.606) 0:05:58.440 ********** 2025-03-23 13:11:04.756376 | orchestrator | skipping: 
[testbed-node-3] 2025-03-23 13:11:04.822579 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:11:04.891963 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:11:04.969617 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:11:05.330360 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:11:05.332410 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:11:15.054421 | orchestrator | changed: [testbed-manager] 2025-03-23 13:11:15.054511 | orchestrator | 2025-03-23 13:11:15.054529 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-03-23 13:11:15.054544 | orchestrator | Sunday 23 March 2025 13:11:05 +0000 (0:00:00.783) 0:05:59.224 ********** 2025-03-23 13:11:15.054571 | orchestrator | ok: [testbed-manager] 2025-03-23 13:11:15.055960 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:11:15.059559 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:11:15.060021 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:11:15.060049 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:11:15.060069 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:11:15.060321 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:11:15.060864 | orchestrator | 2025-03-23 13:11:15.061433 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-03-23 13:11:15.061865 | orchestrator | Sunday 23 March 2025 13:11:15 +0000 (0:00:09.717) 0:06:08.941 ********** 2025-03-23 13:11:15.607762 | orchestrator | changed: [testbed-manager] 2025-03-23 13:11:16.187776 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:11:16.189123 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:11:16.190687 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:11:16.190722 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:11:16.191297 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:11:16.195596 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:11:28.764332 | orchestrator | 2025-03-23 13:11:28.764461 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-03-23 13:11:28.764482 | orchestrator | Sunday 23 March 2025 13:11:16 +0000 (0:00:01.139) 0:06:10.081 ********** 2025-03-23 13:11:28.764513 | orchestrator | ok: [testbed-manager] 2025-03-23 13:11:28.764582 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:11:28.764601 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:11:28.764615 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:11:28.764629 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:11:28.764647 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:11:28.765040 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:11:28.765544 | orchestrator | 2025-03-23 13:11:28.766391 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-03-23 13:11:28.767037 | orchestrator | Sunday 23 March 2025 13:11:28 +0000 (0:00:12.569) 0:06:22.650 ********** 2025-03-23 13:11:41.745460 | orchestrator | ok: [testbed-manager] 2025-03-23 13:11:41.746141 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:11:41.746206 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:11:41.747745 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:11:41.750606 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:11:41.751358 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:11:41.752661 | orchestrator | changed: [testbed-node-2] 2025-03-23 
13:11:41.753010 | orchestrator | 2025-03-23 13:11:41.753733 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-03-23 13:11:41.754484 | orchestrator | Sunday 23 March 2025 13:11:41 +0000 (0:00:12.983) 0:06:35.634 ********** 2025-03-23 13:11:42.130135 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-03-23 13:11:43.116625 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-03-23 13:11:43.117345 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-03-23 13:11:43.117641 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-03-23 13:11:43.117670 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-03-23 13:11:43.118089 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-03-23 13:11:43.118875 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-03-23 13:11:43.119391 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-03-23 13:11:43.119552 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-03-23 13:11:43.122984 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-03-23 13:11:43.123197 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-03-23 13:11:43.123383 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-03-23 13:11:43.124233 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-03-23 13:11:43.124648 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-03-23 13:11:43.124676 | orchestrator | 2025-03-23 13:11:43.125466 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-03-23 13:11:43.261136 | orchestrator | Sunday 23 March 2025 13:11:43 +0000 (0:00:01.375) 0:06:37.010 ********** 2025-03-23 13:11:43.261213 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:11:43.338780 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:11:43.413276 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:11:43.485422 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:11:43.561762 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:11:43.709780 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:11:43.710323 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:11:43.710858 | orchestrator | 2025-03-23 13:11:43.711316 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-03-23 13:11:43.711856 | orchestrator | Sunday 23 March 2025 13:11:43 +0000 (0:00:00.594) 0:06:37.604 ********** 2025-03-23 13:11:48.309040 | orchestrator | ok: [testbed-manager] 2025-03-23 13:11:48.309643 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:11:48.309724 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:11:48.310493 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:11:48.313317 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:11:48.314421 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:11:48.314823 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:11:48.314857 | orchestrator | 2025-03-23 13:11:48.315949 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-03-23 13:11:48.316308 | orchestrator | Sunday 23 March 2025 13:11:48 +0000 (0:00:04.597) 0:06:42.201 ********** 2025-03-23 13:11:48.452176 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:11:48.514851 | orchestrator | skipping: [testbed-node-3] 2025-03-23 
13:11:48.586466 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:11:48.841439 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:11:48.904659 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:11:49.014368 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:11:49.014995 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:11:49.015930 | orchestrator | 2025-03-23 13:11:49.017004 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-03-23 13:11:49.018084 | orchestrator | Sunday 23 March 2025 13:11:49 +0000 (0:00:00.704) 0:06:42.906 ********** 2025-03-23 13:11:49.094761 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-03-23 13:11:49.095908 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-03-23 13:11:49.162681 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:11:49.163748 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-03-23 13:11:49.163829 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-03-23 13:11:49.252125 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:11:49.252918 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-03-23 13:11:49.253293 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-03-23 13:11:49.338074 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:11:49.339566 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-03-23 13:11:49.340925 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-03-23 13:11:49.419897 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:11:49.420486 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-03-23 13:11:49.421501 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-03-23 13:11:49.499210 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:11:49.499420 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-03-23 13:11:49.637678 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-03-23 13:11:49.637814 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:11:49.638948 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-03-23 13:11:49.638986 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-03-23 13:11:49.639929 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:11:49.640655 | orchestrator | 2025-03-23 13:11:49.641943 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-03-23 13:11:49.643035 | orchestrator | Sunday 23 March 2025 13:11:49 +0000 (0:00:00.625) 0:06:43.532 ********** 2025-03-23 13:11:49.774237 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:11:49.854361 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:11:49.911946 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:11:49.982205 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:11:50.056965 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:11:50.176408 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:11:50.177380 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:11:50.178593 | orchestrator | 2025-03-23 13:11:50.179460 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-03-23 13:11:50.180485 | orchestrator | Sunday 23 
March 2025 13:11:50 +0000 (0:00:00.538) 0:06:44.070 ********** 2025-03-23 13:11:50.315405 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:11:50.380486 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:11:50.448022 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:11:50.514654 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:11:50.580094 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:11:50.688089 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:11:50.688955 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:11:50.690003 | orchestrator | 2025-03-23 13:11:50.691490 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-03-23 13:11:50.692622 | orchestrator | Sunday 23 March 2025 13:11:50 +0000 (0:00:00.512) 0:06:44.582 ********** 2025-03-23 13:11:50.887214 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:11:50.959074 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:11:51.024801 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:11:51.094544 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:11:51.221023 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:11:51.221653 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:11:51.222333 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:11:51.223226 | orchestrator | 2025-03-23 13:11:51.227110 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-03-23 13:11:51.227803 | orchestrator | Sunday 23 March 2025 13:11:51 +0000 (0:00:00.534) 0:06:45.117 ********** 2025-03-23 13:11:57.831136 | orchestrator | ok: [testbed-manager] 2025-03-23 13:11:57.831490 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:11:57.832696 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:11:57.834091 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:11:57.835937 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:11:57.836767 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:11:57.836887 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:11:57.837191 | orchestrator | 2025-03-23 13:11:57.837585 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-03-23 13:11:57.837878 | orchestrator | Sunday 23 March 2025 13:11:57 +0000 (0:00:06.606) 0:06:51.724 ********** 2025-03-23 13:11:58.849057 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:11:58.849474 | orchestrator | 2025-03-23 13:11:58.849508 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-03-23 13:11:58.849531 | orchestrator | Sunday 23 March 2025 13:11:58 +0000 (0:00:01.019) 0:06:52.743 ********** 2025-03-23 13:11:59.451341 | orchestrator | ok: [testbed-manager] 2025-03-23 13:11:59.943234 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:11:59.943806 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:11:59.944684 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:11:59.945157 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:11:59.945924 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:11:59.950556 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:12:00.401489 | orchestrator | 2025-03-23 13:12:00.401586 | orchestrator | TASK 
[osism.services.docker : Create systemd overlay directory] **************** 2025-03-23 13:12:00.401604 | orchestrator | Sunday 23 March 2025 13:11:59 +0000 (0:00:01.091) 0:06:53.835 ********** 2025-03-23 13:12:00.401633 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:00.838843 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:12:00.839302 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:12:00.841112 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:12:00.841588 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:12:00.842812 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:12:00.843142 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:12:00.845018 | orchestrator | 2025-03-23 13:12:00.845626 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-03-23 13:12:00.846162 | orchestrator | Sunday 23 March 2025 13:12:00 +0000 (0:00:00.898) 0:06:54.734 ********** 2025-03-23 13:12:02.499181 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:02.499392 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:12:02.499429 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:12:02.500134 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:12:02.500617 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:12:02.500647 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:12:02.501599 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:12:02.503648 | orchestrator | 2025-03-23 13:12:02.505710 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-03-23 13:12:02.505998 | orchestrator | Sunday 23 March 2025 13:12:02 +0000 (0:00:01.660) 0:06:56.394 ********** 2025-03-23 13:12:02.663322 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:12:03.952958 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:12:03.953652 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:12:03.953697 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:12:03.955396 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:12:03.956126 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:12:03.960776 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:12:03.961414 | orchestrator | 2025-03-23 13:12:03.961450 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-03-23 13:12:05.477771 | orchestrator | Sunday 23 March 2025 13:12:03 +0000 (0:00:01.450) 0:06:57.845 ********** 2025-03-23 13:12:05.477904 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:05.477979 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:12:05.478407 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:12:05.479221 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:12:05.480221 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:12:05.480758 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:12:05.481241 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:12:05.481972 | orchestrator | 2025-03-23 13:12:05.482529 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-03-23 13:12:05.485341 | orchestrator | Sunday 23 March 2025 13:12:05 +0000 (0:00:01.527) 0:06:59.372 ********** 2025-03-23 13:12:07.032942 | orchestrator | changed: [testbed-manager] 2025-03-23 13:12:07.033198 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:12:07.033235 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:12:07.034078 | orchestrator | changed: [testbed-node-5] 
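
The "Copy daemon.json configuration file" task, whose per-host results continue below, writes the Docker daemon configuration to each node and, together with the "Restart docker service" handler that runs later in this log, follows the usual template-plus-handler pattern. The sketch below illustrates that pattern only; the template name and the daemon.json settings in the comment are assumptions, not the values shipped by osism.services.docker.

# Sketch of the template-plus-handler pattern (file name and settings are assumed).
- name: Copy daemon.json configuration file
  ansible.builtin.template:
    src: daemon.json.j2            # hypothetical template name
    dest: /etc/docker/daemon.json
    owner: root
    group: root
    mode: "0644"
  notify: Restart docker service   # handler name as it appears in this log

# daemon.json.j2 could render something like (illustrative values only):
# {
#   "log-driver": "json-file",
#   "log-opts": {"max-size": "10m", "max-file": "3"},
#   "storage-driver": "overlay2"
# }
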
2025-03-23 13:12:07.034997 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:12:07.036630 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:12:07.037173 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:12:07.037213 | orchestrator | 2025-03-23 13:12:07.037394 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-03-23 13:12:07.037895 | orchestrator | Sunday 23 March 2025 13:12:07 +0000 (0:00:01.549) 0:07:00.922 ********** 2025-03-23 13:12:08.201683 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:12:08.201977 | orchestrator | 2025-03-23 13:12:08.202560 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-03-23 13:12:08.203568 | orchestrator | Sunday 23 March 2025 13:12:08 +0000 (0:00:01.172) 0:07:02.095 ********** 2025-03-23 13:12:09.628972 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:09.629770 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:12:09.629880 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:12:09.631144 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:12:09.631466 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:12:09.632260 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:12:09.632470 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:12:09.632795 | orchestrator | 2025-03-23 13:12:09.633246 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-03-23 13:12:09.633652 | orchestrator | Sunday 23 March 2025 13:12:09 +0000 (0:00:01.426) 0:07:03.521 ********** 2025-03-23 13:12:10.820599 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:10.823513 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:12:10.823802 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:12:10.824027 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:12:10.824961 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:12:10.825487 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:12:10.826864 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:12:10.827128 | orchestrator | 2025-03-23 13:12:10.827854 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-03-23 13:12:10.828444 | orchestrator | Sunday 23 March 2025 13:12:10 +0000 (0:00:01.191) 0:07:04.712 ********** 2025-03-23 13:12:12.022311 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:12.024720 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:12:12.025192 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:12:12.025229 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:12:12.025985 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:12:12.026536 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:12:12.026723 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:12:12.027202 | orchestrator | 2025-03-23 13:12:12.027896 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-03-23 13:12:12.028118 | orchestrator | Sunday 23 March 2025 13:12:12 +0000 (0:00:01.202) 0:07:05.915 ********** 2025-03-23 13:12:12.725120 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:13.458254 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:12:13.458833 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:12:13.460191 | orchestrator | ok: [testbed-node-5] 2025-03-23 
13:12:13.461779 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:12:13.461875 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:12:13.461913 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:12:13.462005 | orchestrator | 2025-03-23 13:12:13.463024 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-03-23 13:12:13.463489 | orchestrator | Sunday 23 March 2025 13:12:13 +0000 (0:00:01.435) 0:07:07.351 ********** 2025-03-23 13:12:14.788832 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:12:14.789654 | orchestrator | 2025-03-23 13:12:14.792854 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-03-23 13:12:14.793804 | orchestrator | Sunday 23 March 2025 13:12:14 +0000 (0:00:01.002) 0:07:08.354 ********** 2025-03-23 13:12:14.794731 | orchestrator | 2025-03-23 13:12:14.797318 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-03-23 13:12:14.801161 | orchestrator | Sunday 23 March 2025 13:12:14 +0000 (0:00:00.049) 0:07:08.404 ********** 2025-03-23 13:12:14.803877 | orchestrator | 2025-03-23 13:12:14.804598 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-03-23 13:12:14.805409 | orchestrator | Sunday 23 March 2025 13:12:14 +0000 (0:00:00.040) 0:07:08.444 ********** 2025-03-23 13:12:14.806823 | orchestrator | 2025-03-23 13:12:14.809600 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-03-23 13:12:14.811577 | orchestrator | Sunday 23 March 2025 13:12:14 +0000 (0:00:00.042) 0:07:08.486 ********** 2025-03-23 13:12:14.813930 | orchestrator | 2025-03-23 13:12:14.817005 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-03-23 13:12:14.818063 | orchestrator | Sunday 23 March 2025 13:12:14 +0000 (0:00:00.059) 0:07:08.545 ********** 2025-03-23 13:12:14.818094 | orchestrator | 2025-03-23 13:12:14.818420 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-03-23 13:12:14.818475 | orchestrator | Sunday 23 March 2025 13:12:14 +0000 (0:00:00.047) 0:07:08.593 ********** 2025-03-23 13:12:14.818788 | orchestrator | 2025-03-23 13:12:14.822148 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-03-23 13:12:14.824216 | orchestrator | Sunday 23 March 2025 13:12:14 +0000 (0:00:00.039) 0:07:08.633 ********** 2025-03-23 13:12:14.824300 | orchestrator | 2025-03-23 13:12:14.824322 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-03-23 13:12:16.016090 | orchestrator | Sunday 23 March 2025 13:12:14 +0000 (0:00:00.049) 0:07:08.682 ********** 2025-03-23 13:12:16.016244 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:12:16.017508 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:12:16.018614 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:12:16.022560 | orchestrator | 2025-03-23 13:12:17.827882 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-03-23 13:12:17.827997 | orchestrator | Sunday 23 March 2025 13:12:16 +0000 (0:00:01.227) 0:07:09.910 ********** 2025-03-23 13:12:17.828034 | orchestrator | 
changed: [testbed-manager] 2025-03-23 13:12:17.829929 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:12:17.830154 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:12:17.830187 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:12:17.832104 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:12:17.832642 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:12:17.833333 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:12:17.834252 | orchestrator | 2025-03-23 13:12:17.834679 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-03-23 13:12:17.835066 | orchestrator | Sunday 23 March 2025 13:12:17 +0000 (0:00:01.808) 0:07:11.718 ********** 2025-03-23 13:12:19.048308 | orchestrator | changed: [testbed-manager] 2025-03-23 13:12:19.049323 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:12:19.050122 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:12:19.051810 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:12:19.053263 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:12:19.053554 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:12:19.054824 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:12:19.054921 | orchestrator | 2025-03-23 13:12:19.055885 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-03-23 13:12:19.056090 | orchestrator | Sunday 23 March 2025 13:12:19 +0000 (0:00:01.221) 0:07:12.940 ********** 2025-03-23 13:12:19.209408 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:12:21.161623 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:12:21.162550 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:12:21.162588 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:12:21.163071 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:12:21.165080 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:12:21.165899 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:12:21.166839 | orchestrator | 2025-03-23 13:12:21.167571 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-03-23 13:12:21.168109 | orchestrator | Sunday 23 March 2025 13:12:21 +0000 (0:00:02.114) 0:07:15.055 ********** 2025-03-23 13:12:21.262699 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:12:21.263683 | orchestrator | 2025-03-23 13:12:21.264247 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-03-23 13:12:21.265318 | orchestrator | Sunday 23 March 2025 13:12:21 +0000 (0:00:00.101) 0:07:15.156 ********** 2025-03-23 13:12:22.355867 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:22.358428 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:12:22.358947 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:12:22.360071 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:12:22.361224 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:12:22.361900 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:12:22.362721 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:12:22.363807 | orchestrator | 2025-03-23 13:12:22.364567 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-03-23 13:12:22.365018 | orchestrator | Sunday 23 March 2025 13:12:22 +0000 (0:00:01.092) 0:07:16.249 ********** 2025-03-23 13:12:22.564207 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:12:22.628543 | orchestrator | skipping: 
[testbed-node-3] 2025-03-23 13:12:22.700626 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:12:22.773159 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:12:23.077990 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:12:23.078228 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:12:23.079655 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:12:23.080488 | orchestrator | 2025-03-23 13:12:23.080955 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-03-23 13:12:23.081947 | orchestrator | Sunday 23 March 2025 13:12:23 +0000 (0:00:00.721) 0:07:16.970 ********** 2025-03-23 13:12:23.991065 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:12:23.991782 | orchestrator | 2025-03-23 13:12:23.992001 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-03-23 13:12:23.998641 | orchestrator | Sunday 23 March 2025 13:12:23 +0000 (0:00:00.913) 0:07:17.883 ********** 2025-03-23 13:12:24.509249 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:24.973068 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:12:24.973498 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:12:24.974379 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:12:24.974873 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:12:24.978592 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:12:24.978871 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:12:24.983091 | orchestrator | 2025-03-23 13:12:24.989463 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-03-23 13:12:27.763908 | orchestrator | Sunday 23 March 2025 13:12:24 +0000 (0:00:00.982) 0:07:18.866 ********** 2025-03-23 13:12:27.764012 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-03-23 13:12:27.764548 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-03-23 13:12:27.764649 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-03-23 13:12:27.764676 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-03-23 13:12:27.764957 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-03-23 13:12:27.765339 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-03-23 13:12:27.767626 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-03-23 13:12:27.767997 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-03-23 13:12:27.768199 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-03-23 13:12:27.768852 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-03-23 13:12:27.770247 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-03-23 13:12:27.771101 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-03-23 13:12:27.771576 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-03-23 13:12:27.771923 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-03-23 13:12:27.772828 | orchestrator | 2025-03-23 13:12:27.773170 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-03-23 13:12:27.773202 | orchestrator | Sunday 23 March 2025 13:12:27 +0000 (0:00:02.791) 
0:07:21.657 ********** 2025-03-23 13:12:27.894197 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:12:27.970806 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:12:28.044106 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:12:28.115238 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:12:28.192895 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:12:28.303613 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:12:28.304309 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:12:28.304979 | orchestrator | 2025-03-23 13:12:28.305547 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-03-23 13:12:28.308424 | orchestrator | Sunday 23 March 2025 13:12:28 +0000 (0:00:00.539) 0:07:22.197 ********** 2025-03-23 13:12:29.217070 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:12:29.218089 | orchestrator | 2025-03-23 13:12:29.219216 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-03-23 13:12:29.219999 | orchestrator | Sunday 23 March 2025 13:12:29 +0000 (0:00:00.912) 0:07:23.109 ********** 2025-03-23 13:12:29.699629 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:30.169918 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:12:30.170728 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:12:30.171487 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:12:30.172246 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:12:30.172835 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:12:30.173653 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:12:30.174291 | orchestrator | 2025-03-23 13:12:30.174858 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-03-23 13:12:30.175640 | orchestrator | Sunday 23 March 2025 13:12:30 +0000 (0:00:00.953) 0:07:24.062 ********** 2025-03-23 13:12:30.615205 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:30.817087 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:12:31.299724 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:12:31.299936 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:12:31.300789 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:12:31.302985 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:12:31.303724 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:12:31.305198 | orchestrator | 2025-03-23 13:12:31.305873 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-03-23 13:12:31.306513 | orchestrator | Sunday 23 March 2025 13:12:31 +0000 (0:00:01.132) 0:07:25.194 ********** 2025-03-23 13:12:31.447198 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:12:31.523834 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:12:31.605156 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:12:31.671956 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:12:31.740177 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:12:31.853688 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:12:31.855134 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:12:31.856236 | orchestrator | 2025-03-23 13:12:31.857833 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-03-23 
13:12:31.858224 | orchestrator | Sunday 23 March 2025 13:12:31 +0000 (0:00:00.554) 0:07:25.749 ********** 2025-03-23 13:12:33.312419 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:33.314344 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:12:33.314909 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:12:33.315122 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:12:33.315190 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:12:33.315966 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:12:33.321939 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:12:33.322068 | orchestrator | 2025-03-23 13:12:33.322103 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-03-23 13:12:33.322869 | orchestrator | Sunday 23 March 2025 13:12:33 +0000 (0:00:01.453) 0:07:27.202 ********** 2025-03-23 13:12:33.466818 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:12:33.532039 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:12:33.598148 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:12:33.675248 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:12:33.748662 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:12:33.855052 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:12:33.855346 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:12:33.856300 | orchestrator | 2025-03-23 13:12:33.857576 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-03-23 13:12:33.858493 | orchestrator | Sunday 23 March 2025 13:12:33 +0000 (0:00:00.547) 0:07:27.749 ********** 2025-03-23 13:12:36.095053 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:36.096393 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:12:36.097470 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:12:36.104196 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:12:37.508185 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:12:37.508263 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:12:37.508279 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:12:37.508294 | orchestrator | 2025-03-23 13:12:37.508310 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-03-23 13:12:37.508326 | orchestrator | Sunday 23 March 2025 13:12:36 +0000 (0:00:02.236) 0:07:29.986 ********** 2025-03-23 13:12:37.508354 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:37.511125 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:12:37.511161 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:12:37.511443 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:12:37.511474 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:12:37.512461 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:12:37.513558 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:12:37.514104 | orchestrator | 2025-03-23 13:12:37.514810 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-03-23 13:12:37.515340 | orchestrator | Sunday 23 March 2025 13:12:37 +0000 (0:00:01.413) 0:07:31.400 ********** 2025-03-23 13:12:39.375290 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:39.377587 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:12:39.379478 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:12:39.380673 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:12:39.382441 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:12:39.383628 | orchestrator | changed: 
[testbed-node-1] 2025-03-23 13:12:39.384646 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:12:39.386002 | orchestrator | 2025-03-23 13:12:39.387110 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-03-23 13:12:39.387708 | orchestrator | Sunday 23 March 2025 13:12:39 +0000 (0:00:01.866) 0:07:33.266 ********** 2025-03-23 13:12:41.117409 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:41.118607 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:12:41.118649 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:12:41.119604 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:12:41.120294 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:12:41.121937 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:12:41.122486 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:12:41.123314 | orchestrator | 2025-03-23 13:12:41.123575 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-03-23 13:12:41.124924 | orchestrator | Sunday 23 March 2025 13:12:41 +0000 (0:00:01.745) 0:07:35.011 ********** 2025-03-23 13:12:41.814134 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:42.284139 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:12:42.285240 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:12:42.285927 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:12:42.287086 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:12:42.288124 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:12:42.289018 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:12:42.289539 | orchestrator | 2025-03-23 13:12:42.290256 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-03-23 13:12:42.290952 | orchestrator | Sunday 23 March 2025 13:12:42 +0000 (0:00:01.165) 0:07:36.177 ********** 2025-03-23 13:12:42.420055 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:12:42.485447 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:12:42.562718 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:12:42.644532 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:12:42.709289 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:12:43.135069 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:12:43.135551 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:12:43.136699 | orchestrator | 2025-03-23 13:12:43.137259 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-03-23 13:12:43.137955 | orchestrator | Sunday 23 March 2025 13:12:43 +0000 (0:00:00.852) 0:07:37.029 ********** 2025-03-23 13:12:43.266696 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:12:43.343829 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:12:43.428148 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:12:43.525702 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:12:43.601675 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:12:43.729696 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:12:43.731273 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:12:43.732008 | orchestrator | 2025-03-23 13:12:43.733173 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-03-23 13:12:43.733940 | orchestrator | Sunday 23 March 2025 13:12:43 +0000 (0:00:00.593) 0:07:37.622 ********** 2025-03-23 13:12:43.887255 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:43.956867 | 
orchestrator | ok: [testbed-node-3] 2025-03-23 13:12:44.034638 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:12:44.115651 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:12:44.182670 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:12:44.297298 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:12:44.297450 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:12:44.298426 | orchestrator | 2025-03-23 13:12:44.299406 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-03-23 13:12:44.299858 | orchestrator | Sunday 23 March 2025 13:12:44 +0000 (0:00:00.569) 0:07:38.192 ********** 2025-03-23 13:12:44.439962 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:44.507327 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:12:44.793707 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:12:44.872993 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:12:44.943081 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:12:45.073657 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:12:45.074722 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:12:45.074983 | orchestrator | 2025-03-23 13:12:45.077027 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-03-23 13:12:45.077860 | orchestrator | Sunday 23 March 2025 13:12:45 +0000 (0:00:00.775) 0:07:38.967 ********** 2025-03-23 13:12:45.210207 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:45.299037 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:12:45.366426 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:12:45.434944 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:12:45.522155 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:12:45.656604 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:12:45.657456 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:12:45.659070 | orchestrator | 2025-03-23 13:12:45.659264 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-03-23 13:12:45.659874 | orchestrator | Sunday 23 March 2025 13:12:45 +0000 (0:00:00.583) 0:07:39.550 ********** 2025-03-23 13:12:50.865079 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:50.872678 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:12:50.873483 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:12:50.875734 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:12:50.877259 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:12:50.877612 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:12:50.878890 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:12:50.880077 | orchestrator | 2025-03-23 13:12:50.881432 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-03-23 13:12:50.881681 | orchestrator | Sunday 23 March 2025 13:12:50 +0000 (0:00:05.202) 0:07:44.753 ********** 2025-03-23 13:12:51.019923 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:12:51.166299 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:12:51.234278 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:12:51.301400 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:12:51.440746 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:12:51.441583 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:12:51.442744 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:12:51.443871 | orchestrator | 2025-03-23 13:12:51.445856 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-03-23 
13:12:51.447044 | orchestrator | Sunday 23 March 2025 13:12:51 +0000 (0:00:00.583) 0:07:45.337 ********** 2025-03-23 13:12:52.544739 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:12:52.544936 | orchestrator | 2025-03-23 13:12:52.545553 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-03-23 13:12:52.546286 | orchestrator | Sunday 23 March 2025 13:12:52 +0000 (0:00:01.101) 0:07:46.438 ********** 2025-03-23 13:12:54.432472 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:54.433992 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:12:54.435666 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:12:54.435976 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:12:54.436742 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:12:54.437274 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:12:54.438087 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:12:54.439038 | orchestrator | 2025-03-23 13:12:54.441582 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-03-23 13:12:55.616518 | orchestrator | Sunday 23 March 2025 13:12:54 +0000 (0:00:01.890) 0:07:48.328 ********** 2025-03-23 13:12:55.616634 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:55.617230 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:12:55.617376 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:12:55.623781 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:12:55.624142 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:12:55.624173 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:12:55.624922 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:12:55.625410 | orchestrator | 2025-03-23 13:12:55.626135 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-03-23 13:12:55.626383 | orchestrator | Sunday 23 March 2025 13:12:55 +0000 (0:00:01.181) 0:07:49.509 ********** 2025-03-23 13:12:56.556169 | orchestrator | ok: [testbed-manager] 2025-03-23 13:12:56.563100 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:12:56.563563 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:12:56.564701 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:12:56.565427 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:12:56.566233 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:12:56.567175 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:12:56.567871 | orchestrator | 2025-03-23 13:12:56.568433 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-03-23 13:12:56.569220 | orchestrator | Sunday 23 March 2025 13:12:56 +0000 (0:00:00.940) 0:07:50.450 ********** 2025-03-23 13:12:58.792289 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-23 13:12:58.795770 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-23 13:12:58.796150 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-23 13:12:58.796189 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-23 13:12:58.797646 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-23 13:12:58.798644 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-23 13:12:58.800474 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-23 13:12:58.801555 | orchestrator | 2025-03-23 13:12:58.802285 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-03-23 13:12:58.803478 | orchestrator | Sunday 23 March 2025 13:12:58 +0000 (0:00:02.234) 0:07:52.684 ********** 2025-03-23 13:12:59.664384 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:12:59.665276 | orchestrator | 2025-03-23 13:12:59.666693 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-03-23 13:12:59.667111 | orchestrator | Sunday 23 March 2025 13:12:59 +0000 (0:00:00.874) 0:07:53.559 ********** 2025-03-23 13:13:09.157431 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:13:09.157595 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:13:09.158155 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:13:09.159652 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:13:09.160988 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:13:09.161656 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:13:09.162155 | orchestrator | changed: [testbed-manager] 2025-03-23 13:13:09.163796 | orchestrator | 2025-03-23 13:13:09.165151 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-03-23 13:13:09.165810 | orchestrator | Sunday 23 March 2025 13:13:09 +0000 (0:00:09.490) 0:08:03.050 ********** 2025-03-23 13:13:11.208005 | orchestrator | ok: [testbed-manager] 2025-03-23 13:13:11.208464 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:13:11.209288 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:13:11.210689 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:13:11.212184 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:13:11.213216 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:13:11.213869 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:13:11.215153 | orchestrator | 2025-03-23 13:13:11.216095 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-03-23 13:13:11.216351 | orchestrator | Sunday 23 March 2025 13:13:11 +0000 (0:00:02.050) 0:08:05.100 ********** 2025-03-23 13:13:12.518256 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:13:12.518934 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:13:12.519298 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:13:12.520218 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:13:12.521661 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:13:12.523057 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:13:12.523901 | orchestrator | 2025-03-23 13:13:12.524943 | orchestrator | RUNNING HANDLER [osism.services.chrony : 
Restart chrony service] *************** 2025-03-23 13:13:12.525768 | orchestrator | Sunday 23 March 2025 13:13:12 +0000 (0:00:01.309) 0:08:06.409 ********** 2025-03-23 13:13:14.027972 | orchestrator | changed: [testbed-manager] 2025-03-23 13:13:14.031055 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:13:14.031093 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:13:14.032351 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:13:14.033276 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:13:14.035711 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:13:14.036004 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:13:14.037259 | orchestrator | 2025-03-23 13:13:14.039172 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-03-23 13:13:14.040190 | orchestrator | 2025-03-23 13:13:14.041280 | orchestrator | TASK [Include hardening role] ************************************************** 2025-03-23 13:13:14.041668 | orchestrator | Sunday 23 March 2025 13:13:14 +0000 (0:00:01.508) 0:08:07.918 ********** 2025-03-23 13:13:14.188041 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:13:14.279786 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:13:14.344711 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:13:14.407075 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:13:14.481059 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:13:14.613241 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:13:14.614926 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:13:14.615696 | orchestrator | 2025-03-23 13:13:14.617128 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-03-23 13:13:14.617395 | orchestrator | 2025-03-23 13:13:14.618785 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-03-23 13:13:14.619005 | orchestrator | Sunday 23 March 2025 13:13:14 +0000 (0:00:00.588) 0:08:08.507 ********** 2025-03-23 13:13:16.027035 | orchestrator | changed: [testbed-manager] 2025-03-23 13:13:16.027980 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:13:16.029232 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:13:16.030153 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:13:16.030847 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:13:16.033672 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:13:16.033757 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:13:16.035412 | orchestrator | 2025-03-23 13:13:16.036802 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-03-23 13:13:16.037109 | orchestrator | Sunday 23 March 2025 13:13:16 +0000 (0:00:01.412) 0:08:09.920 ********** 2025-03-23 13:13:17.545775 | orchestrator | ok: [testbed-manager] 2025-03-23 13:13:17.548997 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:13:17.550529 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:13:17.550558 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:13:17.551335 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:13:17.552600 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:13:17.554240 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:13:17.555630 | orchestrator | 2025-03-23 13:13:17.556613 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-03-23 13:13:17.557081 | orchestrator | Sunday 23 March 2025 13:13:17 +0000 (0:00:01.519) 
0:08:11.439 ********** 2025-03-23 13:13:17.693974 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:13:17.766976 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:13:17.834787 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:13:18.079204 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:13:18.153590 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:13:18.606441 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:13:18.606585 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:13:18.606603 | orchestrator | 2025-03-23 13:13:18.606618 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-03-23 13:13:18.606636 | orchestrator | Sunday 23 March 2025 13:13:18 +0000 (0:00:01.058) 0:08:12.498 ********** 2025-03-23 13:13:19.954697 | orchestrator | changed: [testbed-manager] 2025-03-23 13:13:19.957274 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:13:19.958938 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:13:19.958998 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:13:19.961133 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:13:19.963152 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:13:19.964788 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:13:19.964823 | orchestrator | 2025-03-23 13:13:19.965273 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-03-23 13:13:19.966313 | orchestrator | 2025-03-23 13:13:19.967389 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-03-23 13:13:19.968393 | orchestrator | Sunday 23 March 2025 13:13:19 +0000 (0:00:01.349) 0:08:13.847 ********** 2025-03-23 13:13:20.949744 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:13:20.950819 | orchestrator | 2025-03-23 13:13:20.952026 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-03-23 13:13:20.953159 | orchestrator | Sunday 23 March 2025 13:13:20 +0000 (0:00:00.997) 0:08:14.844 ********** 2025-03-23 13:13:21.564105 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:13:21.654169 | orchestrator | ok: [testbed-manager] 2025-03-23 13:13:22.124398 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:13:22.124548 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:13:22.125408 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:13:22.126986 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:13:22.127055 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:13:22.127782 | orchestrator | 2025-03-23 13:13:22.128059 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-03-23 13:13:22.128955 | orchestrator | Sunday 23 March 2025 13:13:22 +0000 (0:00:01.173) 0:08:16.018 ********** 2025-03-23 13:13:23.373315 | orchestrator | changed: [testbed-manager] 2025-03-23 13:13:23.373930 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:13:23.375387 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:13:23.375541 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:13:23.376518 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:13:23.376950 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:13:23.377423 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:13:23.378739 | orchestrator | 2025-03-23 13:13:23.378870 | orchestrator | TASK 
[Set osism.bootstrap.timestamp fact] ************************************** 2025-03-23 13:13:23.378890 | orchestrator | Sunday 23 March 2025 13:13:23 +0000 (0:00:01.250) 0:08:17.268 ********** 2025-03-23 13:13:24.643001 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:13:24.643178 | orchestrator | 2025-03-23 13:13:24.644005 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-03-23 13:13:24.644506 | orchestrator | Sunday 23 March 2025 13:13:24 +0000 (0:00:01.267) 0:08:18.536 ********** 2025-03-23 13:13:25.600738 | orchestrator | ok: [testbed-manager] 2025-03-23 13:13:25.602917 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:13:25.607341 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:13:25.607682 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:13:25.607712 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:13:25.607732 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:13:25.609313 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:13:25.609795 | orchestrator | 2025-03-23 13:13:25.610221 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-03-23 13:13:25.611368 | orchestrator | Sunday 23 March 2025 13:13:25 +0000 (0:00:00.957) 0:08:19.493 ********** 2025-03-23 13:13:26.845416 | orchestrator | changed: [testbed-manager] 2025-03-23 13:13:26.846208 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:13:26.849120 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:13:26.849512 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:13:26.850892 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:13:26.851869 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:13:26.853064 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:13:26.854369 | orchestrator | 2025-03-23 13:13:26.857094 | orchestrator | 2025-03-23 13:13:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-23 13:13:26.857220 | orchestrator | 2025-03-23 13:13:26 | INFO  | Please wait and do not abort execution. 
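The two osism.commons.state passes above persist the bootstrap status and a timestamp as local facts on every host. As a rough verification sketch only (assuming the role writes standard Ansible local facts under /etc/ansible/facts.d, which this log does not show explicitly, and with inventory path and fact file names as placeholders), the recorded state could be read back afterwards like this:

  # Illustrative only; facts path and file names are assumptions, not taken from this log.
  ansible -i <inventory> all -m setup -a 'filter=ansible_local'          # dump custom local facts per host
  ssh testbed-manager 'ls /etc/ansible/facts.d/ && cat /etc/ansible/facts.d/*.fact'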
2025-03-23 13:13:26.857249 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:13:26.858262 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-03-23 13:13:26.859011 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-03-23 13:13:26.860298 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-03-23 13:13:26.862409 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-03-23 13:13:26.863511 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-03-23 13:13:26.867227 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-03-23 13:13:26.868695 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-03-23 13:13:26.868718 | orchestrator | 2025-03-23 13:13:26.868732 | orchestrator | Sunday 23 March 2025 13:13:26 +0000 (0:00:01.243) 0:08:20.737 ********** 2025-03-23 13:13:26.868746 | orchestrator | =============================================================================== 2025-03-23 13:13:26.868763 | orchestrator | osism.commons.packages : Install required packages --------------------- 79.51s 2025-03-23 13:13:26.870327 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.83s 2025-03-23 13:13:26.872046 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.89s 2025-03-23 13:13:26.872531 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 14.18s 2025-03-23 13:13:26.873132 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.87s 2025-03-23 13:13:26.873807 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.57s 2025-03-23 13:13:26.874251 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.98s 2025-03-23 13:13:26.875672 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 12.57s 2025-03-23 13:13:26.876627 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.72s 2025-03-23 13:13:26.877266 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.49s 2025-03-23 13:13:26.877293 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.12s 2025-03-23 13:13:26.878443 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.35s 2025-03-23 13:13:26.881064 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.27s 2025-03-23 13:13:26.882641 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.82s 2025-03-23 13:13:26.884760 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.54s 2025-03-23 13:13:26.886600 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 7.13s 2025-03-23 13:13:26.887564 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 6.61s 2025-03-23 13:13:26.888346 | orchestrator | osism.services.docker : Install 
apt-transport-https package ------------- 6.40s 2025-03-23 13:13:26.889423 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.04s 2025-03-23 13:13:26.889701 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.78s 2025-03-23 13:13:28.145266 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-03-23 13:13:30.350800 | orchestrator | + osism apply network 2025-03-23 13:13:30.350980 | orchestrator | 2025-03-23 13:13:30 | INFO  | Task 3a4c331f-9807-4863-aa90-512dae5e6029 (network) was prepared for execution. 2025-03-23 13:13:34.084210 | orchestrator | 2025-03-23 13:13:30 | INFO  | It takes a moment until task 3a4c331f-9807-4863-aa90-512dae5e6029 (network) has been started and output is visible here. 2025-03-23 13:13:34.084313 | orchestrator | 2025-03-23 13:13:34.084696 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-03-23 13:13:34.085765 | orchestrator | 2025-03-23 13:13:34.086233 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-03-23 13:13:34.087775 | orchestrator | Sunday 23 March 2025 13:13:34 +0000 (0:00:00.247) 0:00:00.247 ********** 2025-03-23 13:13:34.243116 | orchestrator | ok: [testbed-manager] 2025-03-23 13:13:34.326746 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:13:34.414807 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:13:34.519219 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:13:34.620070 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:13:34.889067 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:13:34.889488 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:13:34.891048 | orchestrator | 2025-03-23 13:13:34.894548 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-03-23 13:13:34.895449 | orchestrator | Sunday 23 March 2025 13:13:34 +0000 (0:00:00.804) 0:00:01.051 ********** 2025-03-23 13:13:36.211510 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:13:36.212313 | orchestrator | 2025-03-23 13:13:36.217181 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-03-23 13:13:36.217513 | orchestrator | Sunday 23 March 2025 13:13:36 +0000 (0:00:01.321) 0:00:02.373 ********** 2025-03-23 13:13:38.373492 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:13:38.374153 | orchestrator | ok: [testbed-manager] 2025-03-23 13:13:38.378142 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:13:38.378715 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:13:38.378743 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:13:38.379899 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:13:38.381300 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:13:38.383009 | orchestrator | 2025-03-23 13:13:38.384028 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-03-23 13:13:38.384515 | orchestrator | Sunday 23 March 2025 13:13:38 +0000 (0:00:02.161) 0:00:04.534 ********** 2025-03-23 13:13:40.184794 | orchestrator | ok: [testbed-manager] 2025-03-23 13:13:40.185047 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:13:40.186568 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:13:40.188019 | orchestrator | ok: [testbed-node-2] 
2025-03-23 13:13:40.189486 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:13:40.189824 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:13:40.190295 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:13:40.190934 | orchestrator | 2025-03-23 13:13:40.191533 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-03-23 13:13:40.192481 | orchestrator | Sunday 23 March 2025 13:13:40 +0000 (0:00:01.803) 0:00:06.338 ********** 2025-03-23 13:13:41.326647 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-03-23 13:13:41.328012 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-03-23 13:13:41.329072 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-03-23 13:13:41.330940 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-03-23 13:13:41.331480 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-03-23 13:13:41.332337 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-03-23 13:13:41.333405 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-03-23 13:13:41.334218 | orchestrator | 2025-03-23 13:13:41.334253 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-03-23 13:13:41.334891 | orchestrator | Sunday 23 March 2025 13:13:41 +0000 (0:00:01.150) 0:00:07.488 ********** 2025-03-23 13:13:43.269707 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-03-23 13:13:43.270515 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-23 13:13:43.274114 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-23 13:13:43.276097 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-03-23 13:13:43.276130 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-03-23 13:13:43.276983 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-03-23 13:13:43.277756 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-03-23 13:13:43.278543 | orchestrator | 2025-03-23 13:13:43.279140 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-03-23 13:13:43.279505 | orchestrator | Sunday 23 March 2025 13:13:43 +0000 (0:00:01.945) 0:00:09.434 ********** 2025-03-23 13:13:45.033721 | orchestrator | changed: [testbed-manager] 2025-03-23 13:13:45.033955 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:13:45.035029 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:13:45.035658 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:13:45.040165 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:13:45.040419 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:13:45.041266 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:13:45.042209 | orchestrator | 2025-03-23 13:13:45.042689 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-03-23 13:13:45.043240 | orchestrator | Sunday 23 March 2025 13:13:45 +0000 (0:00:01.760) 0:00:11.194 ********** 2025-03-23 13:13:45.562009 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-23 13:13:46.044788 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-23 13:13:46.044992 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-03-23 13:13:46.045814 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-03-23 13:13:46.047127 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-03-23 13:13:46.047722 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-03-23 13:13:46.048862 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-03-23 
13:13:46.052768 | orchestrator | 2025-03-23 13:13:46.053445 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-03-23 13:13:46.053733 | orchestrator | Sunday 23 March 2025 13:13:46 +0000 (0:00:01.016) 0:00:12.211 ********** 2025-03-23 13:13:46.508740 | orchestrator | ok: [testbed-manager] 2025-03-23 13:13:46.599840 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:13:47.268786 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:13:47.269249 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:13:47.269285 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:13:47.270010 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:13:47.271288 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:13:47.272039 | orchestrator | 2025-03-23 13:13:47.275892 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-03-23 13:13:47.276148 | orchestrator | Sunday 23 March 2025 13:13:47 +0000 (0:00:01.219) 0:00:13.431 ********** 2025-03-23 13:13:47.438849 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:13:47.533711 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:13:47.625291 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:13:47.708084 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:13:47.795241 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:13:48.142583 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:13:48.143048 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:13:48.143992 | orchestrator | 2025-03-23 13:13:48.144637 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-03-23 13:13:48.145676 | orchestrator | Sunday 23 March 2025 13:13:48 +0000 (0:00:00.874) 0:00:14.306 ********** 2025-03-23 13:13:50.152190 | orchestrator | ok: [testbed-manager] 2025-03-23 13:13:50.156651 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:13:50.157995 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:13:50.159515 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:13:50.160763 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:13:50.161477 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:13:50.164848 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:13:50.165742 | orchestrator | 2025-03-23 13:13:50.166567 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-03-23 13:13:50.167616 | orchestrator | Sunday 23 March 2025 13:13:50 +0000 (0:00:02.010) 0:00:16.316 ********** 2025-03-23 13:13:52.029627 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-03-23 13:13:52.029750 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-03-23 13:13:52.029770 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-03-23 13:13:52.029784 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-03-23 13:13:52.029802 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-03-23 13:13:52.030350 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-03-23 13:13:52.031346 | orchestrator | changed: 
[testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-03-23 13:13:52.032454 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-03-23 13:13:52.033751 | orchestrator | 2025-03-23 13:13:52.034004 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-03-23 13:13:52.035086 | orchestrator | Sunday 23 March 2025 13:13:52 +0000 (0:00:01.869) 0:00:18.186 ********** 2025-03-23 13:13:53.547614 | orchestrator | ok: [testbed-manager] 2025-03-23 13:13:53.547774 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:13:53.549487 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:13:53.549836 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:13:53.552233 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:13:53.552901 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:13:53.553983 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:13:53.556096 | orchestrator | 2025-03-23 13:13:53.556893 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-03-23 13:13:53.557828 | orchestrator | Sunday 23 March 2025 13:13:53 +0000 (0:00:01.526) 0:00:19.712 ********** 2025-03-23 13:13:54.988052 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:13:54.991476 | orchestrator | 2025-03-23 13:13:56.564946 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-03-23 13:13:56.565053 | orchestrator | Sunday 23 March 2025 13:13:54 +0000 (0:00:01.436) 0:00:21.148 ********** 2025-03-23 13:13:56.565084 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:13:56.566251 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:13:56.570124 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:13:56.570192 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:13:56.570212 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:13:56.570228 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:13:56.570247 | orchestrator | ok: [testbed-manager] 2025-03-23 13:13:56.571000 | orchestrator | 2025-03-23 13:13:56.572477 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-03-23 13:13:56.573257 | orchestrator | Sunday 23 March 2025 13:13:56 +0000 (0:00:01.580) 0:00:22.729 ********** 2025-03-23 13:13:56.809776 | orchestrator | ok: [testbed-manager] 2025-03-23 13:13:56.899588 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:13:57.171259 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:13:57.260351 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:13:57.350980 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:13:57.508939 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:13:57.510097 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:13:57.511242 | orchestrator | 2025-03-23 13:13:57.511815 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-03-23 13:13:57.512843 | orchestrator | Sunday 23 March 2025 13:13:57 +0000 (0:00:00.942) 0:00:23.671 ********** 2025-03-23 13:13:57.989177 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-03-23 13:13:57.989587 | orchestrator | skipping: [testbed-manager] => 
(item=/etc/netplan/01-osism.yaml)  2025-03-23 13:13:57.990260 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-03-23 13:13:57.991171 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-03-23 13:13:58.555142 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-03-23 13:13:58.556974 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-03-23 13:13:58.558005 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-03-23 13:13:58.558989 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-03-23 13:13:58.559813 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-03-23 13:13:58.562209 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-03-23 13:13:58.562942 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-03-23 13:13:58.562971 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-03-23 13:13:58.562994 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-03-23 13:13:58.563523 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-03-23 13:13:58.564150 | orchestrator | 2025-03-23 13:13:58.565184 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-03-23 13:13:58.566929 | orchestrator | Sunday 23 March 2025 13:13:58 +0000 (0:00:01.047) 0:00:24.719 ********** 2025-03-23 13:13:58.950554 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:13:59.038064 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:13:59.127006 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:13:59.214935 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:13:59.303314 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:14:00.536614 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:14:00.539304 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:14:00.539882 | orchestrator | 2025-03-23 13:14:00.541787 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-03-23 13:14:00.543051 | orchestrator | Sunday 23 March 2025 13:14:00 +0000 (0:00:01.980) 0:00:26.699 ********** 2025-03-23 13:14:00.684169 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:14:00.753160 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:14:00.957742 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:14:01.032248 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:14:01.105417 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:14:01.138945 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:14:01.139270 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:14:01.139743 | orchestrator | 2025-03-23 13:14:01.140458 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:14:01.140554 | orchestrator | 2025-03-23 13:14:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-23 13:14:01.141704 | orchestrator | 2025-03-23 13:14:01 | INFO  | Please wait and do not abort execution. 
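The network play above keeps the rendered netplan file (/etc/netplan/01-osism.yaml), removes the cloud-init default (50-cloud-init.yaml) and installs networkd-dispatcher hooks such as routable.d/vxlan.sh. A minimal, purely illustrative way to sanity-check the result on a node, assuming netplan.io with systemd-networkd (the netplan-Debian-family task path shown above), would be:

  # Not part of the job; verification sketch only.
  ssh testbed-node-0 'sudo netplan get'                        # merged netplan configuration
  ssh testbed-node-0 'ip -br addr show'                        # interfaces and addresses actually applied
  ssh testbed-node-0 'ls /etc/networkd-dispatcher/routable.d'  # dispatcher hooks, e.g. vxlan.sh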
2025-03-23 13:14:01.141738 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-23 13:14:01.142110 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-23 13:14:01.142140 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-23 13:14:01.142163 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-23 13:14:01.142429 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-23 13:14:01.143029 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-23 13:14:01.143359 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-23 13:14:01.143970 | orchestrator | 2025-03-23 13:14:01.144355 | orchestrator | Sunday 23 March 2025 13:14:01 +0000 (0:00:00.606) 0:00:27.306 ********** 2025-03-23 13:14:01.144913 | orchestrator | =============================================================================== 2025-03-23 13:14:01.145326 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.16s 2025-03-23 13:14:01.145710 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.01s 2025-03-23 13:14:01.146515 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.98s 2025-03-23 13:14:01.147112 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.95s 2025-03-23 13:14:01.148101 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.87s 2025-03-23 13:14:01.149181 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.80s 2025-03-23 13:14:01.150264 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.76s 2025-03-23 13:14:01.150905 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.58s 2025-03-23 13:14:01.151597 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.53s 2025-03-23 13:14:01.152210 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.44s 2025-03-23 13:14:01.152772 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.32s 2025-03-23 13:14:01.153414 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.22s 2025-03-23 13:14:01.154002 | orchestrator | osism.commons.network : Create required directories --------------------- 1.15s 2025-03-23 13:14:01.154492 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.05s 2025-03-23 13:14:01.155512 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.02s 2025-03-23 13:14:01.156075 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.94s 2025-03-23 13:14:01.156567 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.87s 2025-03-23 13:14:01.157311 | orchestrator | osism.commons.network : Gather variables for each operating system ------ 0.80s 2025-03-23 13:14:01.157826 | orchestrator | osism.commons.network : Netplan configuration changed 
------------------- 0.61s 2025-03-23 13:14:01.521058 | orchestrator | + osism apply wireguard 2025-03-23 13:14:02.872555 | orchestrator | 2025-03-23 13:14:02 | INFO  | Task 9fb544c6-d8b1-4a57-bedf-989907b6caa0 (wireguard) was prepared for execution. 2025-03-23 13:14:06.174512 | orchestrator | 2025-03-23 13:14:02 | INFO  | It takes a moment until task 9fb544c6-d8b1-4a57-bedf-989907b6caa0 (wireguard) has been started and output is visible here. 2025-03-23 13:14:06.174631 | orchestrator | 2025-03-23 13:14:06.175078 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-03-23 13:14:06.175669 | orchestrator | 2025-03-23 13:14:06.176706 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-03-23 13:14:06.177129 | orchestrator | Sunday 23 March 2025 13:14:06 +0000 (0:00:00.181) 0:00:00.181 ********** 2025-03-23 13:14:07.785270 | orchestrator | ok: [testbed-manager] 2025-03-23 13:14:07.786377 | orchestrator | 2025-03-23 13:14:07.789221 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-03-23 13:14:07.790121 | orchestrator | Sunday 23 March 2025 13:14:07 +0000 (0:00:01.606) 0:00:01.788 ********** 2025-03-23 13:14:14.855590 | orchestrator | changed: [testbed-manager] 2025-03-23 13:14:14.856155 | orchestrator | 2025-03-23 13:14:14.856302 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-03-23 13:14:14.857166 | orchestrator | Sunday 23 March 2025 13:14:14 +0000 (0:00:07.074) 0:00:08.862 ********** 2025-03-23 13:14:15.409607 | orchestrator | changed: [testbed-manager] 2025-03-23 13:14:15.409748 | orchestrator | 2025-03-23 13:14:15.411993 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-03-23 13:14:15.412255 | orchestrator | Sunday 23 March 2025 13:14:15 +0000 (0:00:00.552) 0:00:09.416 ********** 2025-03-23 13:14:15.871276 | orchestrator | changed: [testbed-manager] 2025-03-23 13:14:15.872063 | orchestrator | 2025-03-23 13:14:15.873282 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-03-23 13:14:15.873700 | orchestrator | Sunday 23 March 2025 13:14:15 +0000 (0:00:00.462) 0:00:09.879 ********** 2025-03-23 13:14:16.438143 | orchestrator | ok: [testbed-manager] 2025-03-23 13:14:16.438282 | orchestrator | 2025-03-23 13:14:16.438539 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-03-23 13:14:16.439340 | orchestrator | Sunday 23 March 2025 13:14:16 +0000 (0:00:00.567) 0:00:10.447 ********** 2025-03-23 13:14:17.040577 | orchestrator | ok: [testbed-manager] 2025-03-23 13:14:17.040790 | orchestrator | 2025-03-23 13:14:17.041524 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-03-23 13:14:17.042081 | orchestrator | Sunday 23 March 2025 13:14:17 +0000 (0:00:00.600) 0:00:11.048 ********** 2025-03-23 13:14:17.471367 | orchestrator | ok: [testbed-manager] 2025-03-23 13:14:17.471701 | orchestrator | 2025-03-23 13:14:17.472725 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-03-23 13:14:17.473936 | orchestrator | Sunday 23 March 2025 13:14:17 +0000 (0:00:00.431) 0:00:11.479 ********** 2025-03-23 13:14:18.768150 | orchestrator | changed: [testbed-manager] 2025-03-23 13:14:18.768585 | orchestrator | 2025-03-23 13:14:18.769709 | orchestrator | TASK 
[osism.services.wireguard : Copy client configuration files] ************** 2025-03-23 13:14:18.770132 | orchestrator | Sunday 23 March 2025 13:14:18 +0000 (0:00:01.295) 0:00:12.775 ********** 2025-03-23 13:14:19.738798 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-23 13:14:19.741144 | orchestrator | changed: [testbed-manager] 2025-03-23 13:14:19.741625 | orchestrator | 2025-03-23 13:14:19.741646 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-03-23 13:14:19.742422 | orchestrator | Sunday 23 March 2025 13:14:19 +0000 (0:00:00.971) 0:00:13.746 ********** 2025-03-23 13:14:21.822762 | orchestrator | changed: [testbed-manager] 2025-03-23 13:14:21.823129 | orchestrator | 2025-03-23 13:14:21.823171 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-03-23 13:14:21.823689 | orchestrator | Sunday 23 March 2025 13:14:21 +0000 (0:00:02.081) 0:00:15.827 ********** 2025-03-23 13:14:22.824542 | orchestrator | changed: [testbed-manager] 2025-03-23 13:14:22.825419 | orchestrator | 2025-03-23 13:14:22.825588 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:14:22.826777 | orchestrator | 2025-03-23 13:14:22 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-23 13:14:22.827071 | orchestrator | 2025-03-23 13:14:22 | INFO  | Please wait and do not abort execution. 2025-03-23 13:14:22.827099 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:14:22.827962 | orchestrator | 2025-03-23 13:14:22.828445 | orchestrator | Sunday 23 March 2025 13:14:22 +0000 (0:00:01.006) 0:00:16.834 ********** 2025-03-23 13:14:22.830066 | orchestrator | =============================================================================== 2025-03-23 13:14:22.830361 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.07s 2025-03-23 13:14:22.831395 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 2.08s 2025-03-23 13:14:22.831706 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.61s 2025-03-23 13:14:22.831728 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.30s 2025-03-23 13:14:22.831746 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.01s 2025-03-23 13:14:22.832566 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.97s 2025-03-23 13:14:22.832743 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.60s 2025-03-23 13:14:22.832787 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.57s 2025-03-23 13:14:22.832970 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s 2025-03-23 13:14:22.833113 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.46s 2025-03-23 13:14:22.833545 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s 2025-03-23 13:14:23.414604 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-03-23 13:14:23.453219 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-03-23 13:14:23.453303 | orchestrator | 
Dload Upload Total Spent Left Speed 2025-03-23 13:14:23.523608 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 211 0 --:--:-- --:--:-- --:--:-- 211 100 15 100 15 0 0 211 0 --:--:-- --:--:-- --:--:-- 211 2025-03-23 13:14:23.536610 | orchestrator | + osism apply --environment custom workarounds 2025-03-23 13:14:25.008712 | orchestrator | 2025-03-23 13:14:25 | INFO  | Trying to run play workarounds in environment custom 2025-03-23 13:14:25.058875 | orchestrator | 2025-03-23 13:14:25 | INFO  | Task 2c10bf63-51ce-45c5-9fdd-d566e166b83c (workarounds) was prepared for execution. 2025-03-23 13:14:28.397050 | orchestrator | 2025-03-23 13:14:25 | INFO  | It takes a moment until task 2c10bf63-51ce-45c5-9fdd-d566e166b83c (workarounds) has been started and output is visible here. 2025-03-23 13:14:28.397166 | orchestrator | 2025-03-23 13:14:28.397305 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 13:14:28.399638 | orchestrator | 2025-03-23 13:14:28.401256 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-03-23 13:14:28.402419 | orchestrator | Sunday 23 March 2025 13:14:28 +0000 (0:00:00.220) 0:00:00.220 ********** 2025-03-23 13:14:28.647492 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-03-23 13:14:28.753844 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-03-23 13:14:28.846781 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-03-23 13:14:28.934496 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-03-23 13:14:29.024704 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-03-23 13:14:29.353814 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-03-23 13:14:29.355696 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-03-23 13:14:29.357549 | orchestrator | 2025-03-23 13:14:29.358412 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-03-23 13:14:29.358445 | orchestrator | 2025-03-23 13:14:29.359078 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-03-23 13:14:29.359992 | orchestrator | Sunday 23 March 2025 13:14:29 +0000 (0:00:00.960) 0:00:01.180 ********** 2025-03-23 13:14:32.352430 | orchestrator | ok: [testbed-manager] 2025-03-23 13:14:32.353437 | orchestrator | 2025-03-23 13:14:32.354199 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-03-23 13:14:32.355496 | orchestrator | 2025-03-23 13:14:32.356065 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-03-23 13:14:32.356631 | orchestrator | Sunday 23 March 2025 13:14:32 +0000 (0:00:02.994) 0:00:04.174 ********** 2025-03-23 13:14:34.178321 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:14:34.182139 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:14:34.183062 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:14:34.183104 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:14:34.187232 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:14:34.189535 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:14:34.193236 | orchestrator | 2025-03-23 13:14:34.194396 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-03-23 
13:14:34.195652 | orchestrator | 2025-03-23 13:14:34.196443 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-03-23 13:14:34.197086 | orchestrator | Sunday 23 March 2025 13:14:34 +0000 (0:00:01.824) 0:00:05.999 ********** 2025-03-23 13:14:35.863312 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-03-23 13:14:35.864266 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-03-23 13:14:35.866794 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-03-23 13:14:35.866853 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-03-23 13:14:35.867497 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-03-23 13:14:35.870609 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-03-23 13:14:35.872848 | orchestrator | 2025-03-23 13:14:35.873664 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-03-23 13:14:35.874319 | orchestrator | Sunday 23 March 2025 13:14:35 +0000 (0:00:01.687) 0:00:07.687 ********** 2025-03-23 13:14:39.705384 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:14:39.707140 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:14:39.707191 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:14:39.707250 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:14:39.707314 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:14:39.710464 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:14:39.710808 | orchestrator | 2025-03-23 13:14:39.711110 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-03-23 13:14:39.711180 | orchestrator | Sunday 23 March 2025 13:14:39 +0000 (0:00:03.841) 0:00:11.528 ********** 2025-03-23 13:14:39.860189 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:14:39.943160 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:14:40.037644 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:14:40.309034 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:14:40.471654 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:14:40.473053 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:14:40.474819 | orchestrator | 2025-03-23 13:14:40.478884 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-03-23 13:14:40.479250 | orchestrator | 2025-03-23 13:14:40.479965 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-03-23 13:14:40.480252 | orchestrator | Sunday 23 March 2025 13:14:40 +0000 (0:00:00.768) 0:00:12.297 ********** 2025-03-23 13:14:42.219622 | orchestrator | changed: [testbed-manager] 2025-03-23 13:14:42.224318 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:14:42.224419 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:14:42.225934 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:14:42.227064 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:14:42.228162 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:14:42.229139 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:14:42.230121 | 
orchestrator | 2025-03-23 13:14:42.230763 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-03-23 13:14:42.233182 | orchestrator | Sunday 23 March 2025 13:14:42 +0000 (0:00:01.748) 0:00:14.045 ********** 2025-03-23 13:14:44.002536 | orchestrator | changed: [testbed-manager] 2025-03-23 13:14:44.002745 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:14:44.003984 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:14:44.004806 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:14:44.006739 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:14:44.007248 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:14:44.007282 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:14:44.008005 | orchestrator | 2025-03-23 13:14:44.008646 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-03-23 13:14:44.009137 | orchestrator | Sunday 23 March 2025 13:14:43 +0000 (0:00:01.778) 0:00:15.824 ********** 2025-03-23 13:14:45.564419 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:14:45.564693 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:14:45.566135 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:14:45.567307 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:14:45.569030 | orchestrator | ok: [testbed-manager] 2025-03-23 13:14:45.569969 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:14:45.570865 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:14:45.571621 | orchestrator | 2025-03-23 13:14:45.572286 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-03-23 13:14:45.572813 | orchestrator | Sunday 23 March 2025 13:14:45 +0000 (0:00:01.566) 0:00:17.390 ********** 2025-03-23 13:14:47.594532 | orchestrator | changed: [testbed-manager] 2025-03-23 13:14:47.598433 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:14:47.598875 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:14:47.598912 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:14:47.598928 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:14:47.598975 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:14:47.600207 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:14:47.601270 | orchestrator | 2025-03-23 13:14:47.601681 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-03-23 13:14:47.602260 | orchestrator | Sunday 23 March 2025 13:14:47 +0000 (0:00:02.030) 0:00:19.420 ********** 2025-03-23 13:14:47.754297 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:14:47.835480 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:14:47.925149 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:14:48.016624 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:14:48.288894 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:14:48.462349 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:14:48.462827 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:14:48.463607 | orchestrator | 2025-03-23 13:14:48.464237 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-03-23 13:14:48.464911 | orchestrator | 2025-03-23 13:14:48.465667 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-03-23 13:14:48.466531 | orchestrator | Sunday 23 March 2025 13:14:48 +0000 (0:00:00.866) 0:00:20.286 ********** 2025-03-23 13:14:51.020309 | orchestrator | ok: 
[testbed-node-4] 2025-03-23 13:14:51.020837 | orchestrator | ok: [testbed-manager] 2025-03-23 13:14:51.022740 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:14:51.022874 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:14:51.024230 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:14:51.027339 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:14:51.028184 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:14:51.028937 | orchestrator | 2025-03-23 13:14:51.028972 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:14:51.029195 | orchestrator | 2025-03-23 13:14:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-23 13:14:51.029917 | orchestrator | 2025-03-23 13:14:51 | INFO  | Please wait and do not abort execution. 2025-03-23 13:14:51.029945 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-23 13:14:51.030150 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:14:51.030832 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:14:51.031169 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:14:51.031185 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:14:51.031193 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:14:51.031570 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:14:51.031994 | orchestrator | 2025-03-23 13:14:51.032603 | orchestrator | Sunday 23 March 2025 13:14:51 +0000 (0:00:02.559) 0:00:22.846 ********** 2025-03-23 13:14:51.033040 | orchestrator | =============================================================================== 2025-03-23 13:14:51.033254 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.84s 2025-03-23 13:14:51.033274 | orchestrator | Apply netplan configuration --------------------------------------------- 2.99s 2025-03-23 13:14:51.034107 | orchestrator | Install python3-docker -------------------------------------------------- 2.56s 2025-03-23 13:14:51.034224 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.03s 2025-03-23 13:14:51.034517 | orchestrator | Apply netplan configuration --------------------------------------------- 1.82s 2025-03-23 13:14:51.034662 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.78s 2025-03-23 13:14:51.034901 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.75s 2025-03-23 13:14:51.035170 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.69s 2025-03-23 13:14:51.035642 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.57s 2025-03-23 13:14:51.035925 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.96s 2025-03-23 13:14:51.036067 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.87s 2025-03-23 13:14:51.036183 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.77s 
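The workarounds play above applies the pending netplan configuration and installs the custom testbed CA on the non-manager nodes. As a rough manual equivalent for a single Debian/Ubuntu node (the destination filename under /usr/local/share/ca-certificates/ is an assumption, not taken from the play), the same steps could be done by hand like this:

  # Sketch: apply netplan and trust the custom CA on one node.
  # Source path is the one shown in the log; the target filename is assumed.
  sudo netplan apply
  sudo cp /opt/configuration/environments/kolla/certificates/ca/testbed.crt \
      /usr/local/share/ca-certificates/testbed.crt
  sudo update-ca-certificates   # Debian family; RedHat family would use update-ca-trust instead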
2025-03-23 13:14:51.622527 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-03-23 13:14:53.127232 | orchestrator | 2025-03-23 13:14:53 | INFO  | Task 5cf3605d-84e5-47c1-87db-5fc4ae25fd0d (reboot) was prepared for execution. 2025-03-23 13:14:56.409554 | orchestrator | 2025-03-23 13:14:53 | INFO  | It takes a moment until task 5cf3605d-84e5-47c1-87db-5fc4ae25fd0d (reboot) has been started and output is visible here. 2025-03-23 13:14:56.409655 | orchestrator | 2025-03-23 13:14:56.411188 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-03-23 13:14:56.413215 | orchestrator | 2025-03-23 13:14:56.413921 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-03-23 13:14:56.414758 | orchestrator | Sunday 23 March 2025 13:14:56 +0000 (0:00:00.155) 0:00:00.155 ********** 2025-03-23 13:14:56.501323 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:14:56.501732 | orchestrator | 2025-03-23 13:14:56.503068 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-03-23 13:14:56.505099 | orchestrator | Sunday 23 March 2025 13:14:56 +0000 (0:00:00.095) 0:00:00.250 ********** 2025-03-23 13:14:57.508083 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:14:57.508519 | orchestrator | 2025-03-23 13:14:57.508657 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-03-23 13:14:57.512609 | orchestrator | Sunday 23 March 2025 13:14:57 +0000 (0:00:01.003) 0:00:01.254 ********** 2025-03-23 13:14:57.628054 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:14:57.628852 | orchestrator | 2025-03-23 13:14:57.630988 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-03-23 13:14:57.631787 | orchestrator | 2025-03-23 13:14:57.632692 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-03-23 13:14:57.633864 | orchestrator | Sunday 23 March 2025 13:14:57 +0000 (0:00:00.122) 0:00:01.377 ********** 2025-03-23 13:14:57.719129 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:14:57.719284 | orchestrator | 2025-03-23 13:14:57.719388 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-03-23 13:14:57.720399 | orchestrator | Sunday 23 March 2025 13:14:57 +0000 (0:00:00.090) 0:00:01.468 ********** 2025-03-23 13:14:58.417804 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:14:58.418117 | orchestrator | 2025-03-23 13:14:58.418611 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-03-23 13:14:58.523501 | orchestrator | Sunday 23 March 2025 13:14:58 +0000 (0:00:00.699) 0:00:02.168 ********** 2025-03-23 13:14:58.523576 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:14:58.524066 | orchestrator | 2025-03-23 13:14:58.524093 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-03-23 13:14:58.524113 | orchestrator | 2025-03-23 13:14:58.524395 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-03-23 13:14:58.524772 | orchestrator | Sunday 23 March 2025 13:14:58 +0000 (0:00:00.102) 0:00:02.270 ********** 2025-03-23 13:14:58.620339 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:14:58.621448 | orchestrator | 2025-03-23 13:14:58.623436 | orchestrator | TASK [Reboot 
system - do not wait for the reboot to complete] ****************** 2025-03-23 13:14:59.446114 | orchestrator | Sunday 23 March 2025 13:14:58 +0000 (0:00:00.099) 0:00:02.370 ********** 2025-03-23 13:14:59.446252 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:14:59.447334 | orchestrator | 2025-03-23 13:14:59.448675 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-03-23 13:14:59.449828 | orchestrator | Sunday 23 March 2025 13:14:59 +0000 (0:00:00.825) 0:00:03.195 ********** 2025-03-23 13:14:59.580273 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:14:59.581426 | orchestrator | 2025-03-23 13:14:59.582614 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-03-23 13:14:59.583360 | orchestrator | 2025-03-23 13:14:59.584107 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-03-23 13:14:59.585151 | orchestrator | Sunday 23 March 2025 13:14:59 +0000 (0:00:00.134) 0:00:03.329 ********** 2025-03-23 13:14:59.735861 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:14:59.736990 | orchestrator | 2025-03-23 13:14:59.737823 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-03-23 13:14:59.739728 | orchestrator | Sunday 23 March 2025 13:14:59 +0000 (0:00:00.150) 0:00:03.479 ********** 2025-03-23 13:15:00.470691 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:15:00.471203 | orchestrator | 2025-03-23 13:15:00.471255 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-03-23 13:15:00.471497 | orchestrator | Sunday 23 March 2025 13:15:00 +0000 (0:00:00.739) 0:00:04.219 ********** 2025-03-23 13:15:00.591036 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:15:00.591645 | orchestrator | 2025-03-23 13:15:00.592614 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-03-23 13:15:00.593501 | orchestrator | 2025-03-23 13:15:00.595778 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-03-23 13:15:00.595846 | orchestrator | Sunday 23 March 2025 13:15:00 +0000 (0:00:00.119) 0:00:04.338 ********** 2025-03-23 13:15:00.700297 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:15:00.701396 | orchestrator | 2025-03-23 13:15:00.701440 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-03-23 13:15:00.701624 | orchestrator | Sunday 23 March 2025 13:15:00 +0000 (0:00:00.111) 0:00:04.449 ********** 2025-03-23 13:15:01.386624 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:15:01.387266 | orchestrator | 2025-03-23 13:15:01.389474 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-03-23 13:15:01.521336 | orchestrator | Sunday 23 March 2025 13:15:01 +0000 (0:00:00.685) 0:00:05.135 ********** 2025-03-23 13:15:01.521425 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:15:01.521667 | orchestrator | 2025-03-23 13:15:01.524393 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-03-23 13:15:01.524832 | orchestrator | 2025-03-23 13:15:01.524863 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-03-23 13:15:01.525372 | orchestrator | Sunday 23 March 2025 13:15:01 +0000 (0:00:00.130) 0:00:05.265 ********** 
2025-03-23 13:15:01.625819 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:15:01.626402 | orchestrator | 2025-03-23 13:15:01.626846 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-03-23 13:15:01.627307 | orchestrator | Sunday 23 March 2025 13:15:01 +0000 (0:00:00.109) 0:00:05.375 ********** 2025-03-23 13:15:02.422463 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:15:02.422637 | orchestrator | 2025-03-23 13:15:02.423300 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-03-23 13:15:02.424173 | orchestrator | Sunday 23 March 2025 13:15:02 +0000 (0:00:00.796) 0:00:06.171 ********** 2025-03-23 13:15:02.454364 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:15:02.455139 | orchestrator | 2025-03-23 13:15:02.456413 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:15:02.457584 | orchestrator | 2025-03-23 13:15:02 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-23 13:15:02.459058 | orchestrator | 2025-03-23 13:15:02 | INFO  | Please wait and do not abort execution. 2025-03-23 13:15:02.459098 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:15:02.459233 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:15:02.460373 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:15:02.461125 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:15:02.462064 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:15:02.462676 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:15:02.463060 | orchestrator | 2025-03-23 13:15:02.463540 | orchestrator | Sunday 23 March 2025 13:15:02 +0000 (0:00:00.031) 0:00:06.203 ********** 2025-03-23 13:15:02.464614 | orchestrator | =============================================================================== 2025-03-23 13:15:02.464713 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.75s 2025-03-23 13:15:02.465419 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.66s 2025-03-23 13:15:02.465752 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.64s 2025-03-23 13:15:03.072684 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-03-23 13:15:04.695857 | orchestrator | 2025-03-23 13:15:04 | INFO  | Task 8ccdb541-32c2-4f52-af78-79018049582e (wait-for-connection) was prepared for execution. 2025-03-23 13:15:08.008530 | orchestrator | 2025-03-23 13:15:04 | INFO  | It takes a moment until task 8ccdb541-32c2-4f52-af78-79018049582e (wait-for-connection) has been started and output is visible here. 
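The reboot play intentionally does not wait for the nodes to come back; the separate wait-for-connection play that follows blocks until they answer again. As an illustration of the same pattern in plain shell (timeout and options are assumptions, not taken from the playbooks):

  # Sketch: poll each node over SSH until it is reachable again after the reboot.
  for node in testbed-node-0 testbed-node-1 testbed-node-2 \
              testbed-node-3 testbed-node-4 testbed-node-5; do
    until ssh -o ConnectTimeout=5 -o BatchMode=yes "$node" true 2>/dev/null; do
      sleep 5   # keep retrying until sshd is back; a real job would also enforce a deadline
    done
    echo "$node is reachable again"
  done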
2025-03-23 13:15:08.008638 | orchestrator | 2025-03-23 13:15:08.008701 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-03-23 13:15:08.009331 | orchestrator | 2025-03-23 13:15:08.010176 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-03-23 13:15:08.012482 | orchestrator | Sunday 23 March 2025 13:15:08 +0000 (0:00:00.192) 0:00:00.192 ********** 2025-03-23 13:15:21.403977 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:15:21.404932 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:15:21.404962 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:15:21.404982 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:15:21.406180 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:15:21.406741 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:15:21.407292 | orchestrator | 2025-03-23 13:15:21.407837 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:15:21.408210 | orchestrator | 2025-03-23 13:15:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-23 13:15:21.408331 | orchestrator | 2025-03-23 13:15:21 | INFO  | Please wait and do not abort execution. 2025-03-23 13:15:21.409282 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:15:21.409680 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:15:21.410146 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:15:21.410571 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:15:21.411094 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:15:21.411483 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:15:21.411860 | orchestrator | 2025-03-23 13:15:21.412326 | orchestrator | Sunday 23 March 2025 13:15:21 +0000 (0:00:13.394) 0:00:13.586 ********** 2025-03-23 13:15:21.412859 | orchestrator | =============================================================================== 2025-03-23 13:15:21.413499 | orchestrator | Wait until remote system is reachable ---------------------------------- 13.39s 2025-03-23 13:15:22.019489 | orchestrator | + osism apply hddtemp 2025-03-23 13:15:23.546295 | orchestrator | 2025-03-23 13:15:23 | INFO  | Task 94a49848-d06d-4fbc-bc4f-4d4eb33bef29 (hddtemp) was prepared for execution. 2025-03-23 13:15:26.967345 | orchestrator | 2025-03-23 13:15:23 | INFO  | It takes a moment until task 94a49848-d06d-4fbc-bc4f-4d4eb33bef29 (hddtemp) has been started and output is visible here. 
2025-03-23 13:15:26.967488 | orchestrator | 2025-03-23 13:15:26.967633 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-03-23 13:15:26.967664 | orchestrator | 2025-03-23 13:15:26.968390 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-03-23 13:15:26.968435 | orchestrator | Sunday 23 March 2025 13:15:26 +0000 (0:00:00.235) 0:00:00.235 ********** 2025-03-23 13:15:27.143085 | orchestrator | ok: [testbed-manager] 2025-03-23 13:15:27.227956 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:15:27.309424 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:15:27.394984 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:15:27.476489 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:15:27.750100 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:15:27.751165 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:15:27.755263 | orchestrator | 2025-03-23 13:15:29.038196 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-03-23 13:15:29.038968 | orchestrator | Sunday 23 March 2025 13:15:27 +0000 (0:00:00.782) 0:00:01.017 ********** 2025-03-23 13:15:29.039042 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:15:29.039132 | orchestrator | 2025-03-23 13:15:29.039791 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-03-23 13:15:29.040955 | orchestrator | Sunday 23 March 2025 13:15:29 +0000 (0:00:01.284) 0:00:02.303 ********** 2025-03-23 13:15:31.186989 | orchestrator | ok: [testbed-manager] 2025-03-23 13:15:31.191291 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:15:31.193084 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:15:31.193831 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:15:31.194342 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:15:31.194769 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:15:31.195525 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:15:31.195812 | orchestrator | 2025-03-23 13:15:31.196398 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-03-23 13:15:31.197089 | orchestrator | Sunday 23 March 2025 13:15:31 +0000 (0:00:02.152) 0:00:04.455 ********** 2025-03-23 13:15:31.760791 | orchestrator | changed: [testbed-manager] 2025-03-23 13:15:31.845477 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:15:32.423315 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:15:32.425580 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:15:32.426360 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:15:32.426406 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:15:32.427918 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:15:32.428504 | orchestrator | 2025-03-23 13:15:32.429040 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-03-23 13:15:32.430930 | orchestrator | Sunday 23 March 2025 13:15:32 +0000 (0:00:01.229) 0:00:05.684 ********** 2025-03-23 13:15:33.889780 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:15:33.890430 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:15:33.891217 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:15:33.891254 | orchestrator | ok: [testbed-node-3] 2025-03-23 
13:15:33.892282 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:15:33.893408 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:15:33.894108 | orchestrator | ok: [testbed-manager] 2025-03-23 13:15:33.894665 | orchestrator | 2025-03-23 13:15:33.895595 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-03-23 13:15:33.897264 | orchestrator | Sunday 23 March 2025 13:15:33 +0000 (0:00:01.469) 0:00:07.154 ********** 2025-03-23 13:15:34.155289 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:15:34.250843 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:15:34.359420 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:15:34.463475 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:15:34.602277 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:15:34.602972 | orchestrator | changed: [testbed-manager] 2025-03-23 13:15:34.603596 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:15:34.604528 | orchestrator | 2025-03-23 13:15:34.605449 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-03-23 13:15:34.606499 | orchestrator | Sunday 23 March 2025 13:15:34 +0000 (0:00:00.718) 0:00:07.873 ********** 2025-03-23 13:15:48.396655 | orchestrator | changed: [testbed-manager] 2025-03-23 13:15:48.397386 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:15:48.397429 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:15:48.399788 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:15:48.400249 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:15:48.401333 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:15:48.402723 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:15:48.403819 | orchestrator | 2025-03-23 13:15:48.405171 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-03-23 13:15:48.405777 | orchestrator | Sunday 23 March 2025 13:15:48 +0000 (0:00:13.785) 0:00:21.659 ********** 2025-03-23 13:15:49.677811 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:15:49.678597 | orchestrator | 2025-03-23 13:15:49.679586 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-03-23 13:15:49.680296 | orchestrator | Sunday 23 March 2025 13:15:49 +0000 (0:00:01.287) 0:00:22.946 ********** 2025-03-23 13:15:51.536620 | orchestrator | changed: [testbed-manager] 2025-03-23 13:15:51.537108 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:15:51.538262 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:15:51.539284 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:15:51.540139 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:15:51.540704 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:15:51.541455 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:15:51.542370 | orchestrator | 2025-03-23 13:15:51.542672 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:15:51.543117 | orchestrator | 2025-03-23 13:15:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-23 13:15:51.543740 | orchestrator | 2025-03-23 13:15:51 | INFO  | Please wait and do not abort execution. 
2025-03-23 13:15:51.543773 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:15:51.545222 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-23 13:15:51.545573 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-23 13:15:51.545598 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-23 13:15:51.545613 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-23 13:15:51.545631 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-23 13:15:51.546130 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-23 13:15:51.547143 | orchestrator | 2025-03-23 13:15:51.547175 | orchestrator | Sunday 23 March 2025 13:15:51 +0000 (0:00:01.859) 0:00:24.805 ********** 2025-03-23 13:15:51.547970 | orchestrator | =============================================================================== 2025-03-23 13:15:51.548712 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.79s 2025-03-23 13:15:51.549013 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.15s 2025-03-23 13:15:51.549225 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.86s 2025-03-23 13:15:51.549486 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.47s 2025-03-23 13:15:51.549950 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.29s 2025-03-23 13:15:51.550317 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.29s 2025-03-23 13:15:51.550833 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.23s 2025-03-23 13:15:51.551267 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.78s 2025-03-23 13:15:51.551489 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.72s 2025-03-23 13:15:52.199671 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-03-23 13:15:53.492776 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-03-23 13:15:53.493400 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-03-23 13:15:53.493434 | orchestrator | + local max_attempts=60 2025-03-23 13:15:53.493451 | orchestrator | + local name=ceph-ansible 2025-03-23 13:15:53.493467 | orchestrator | + local attempt_num=1 2025-03-23 13:15:53.493488 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-03-23 13:15:53.531438 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-23 13:15:53.532641 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-03-23 13:15:53.532669 | orchestrator | + local max_attempts=60 2025-03-23 13:15:53.532684 | orchestrator | + local name=kolla-ansible 2025-03-23 13:15:53.532699 | orchestrator | + local attempt_num=1 2025-03-23 13:15:53.532719 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-03-23 13:15:53.577896 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-23 13:15:53.578143 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2025-03-23 13:15:53.578176 | orchestrator | + local max_attempts=60 2025-03-23 13:15:53.578192 | orchestrator | + local name=osism-ansible 2025-03-23 13:15:53.578207 | orchestrator | + local attempt_num=1 2025-03-23 13:15:53.578258 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-03-23 13:15:53.606234 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-23 13:15:53.995076 | orchestrator | + [[ true == \t\r\u\e ]] 2025-03-23 13:15:53.995174 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-03-23 13:15:53.995207 | orchestrator | ARA in ceph-ansible already disabled. 2025-03-23 13:15:54.414506 | orchestrator | ARA in kolla-ansible already disabled. 2025-03-23 13:15:54.741651 | orchestrator | ARA in osism-ansible already disabled. 2025-03-23 13:15:55.099210 | orchestrator | ARA in osism-kubernetes already disabled. 2025-03-23 13:15:55.100131 | orchestrator | + osism apply gather-facts 2025-03-23 13:15:56.638241 | orchestrator | 2025-03-23 13:15:56 | INFO  | Task 4ddb9eee-6429-46d3-9528-f81de016b9ae (gather-facts) was prepared for execution. 2025-03-23 13:15:59.928723 | orchestrator | 2025-03-23 13:15:56 | INFO  | It takes a moment until task 4ddb9eee-6429-46d3-9528-f81de016b9ae (gather-facts) has been started and output is visible here. 2025-03-23 13:15:59.928856 | orchestrator | 2025-03-23 13:15:59.929427 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-03-23 13:15:59.934338 | orchestrator | 2025-03-23 13:15:59.935923 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-03-23 13:15:59.936221 | orchestrator | Sunday 23 March 2025 13:15:59 +0000 (0:00:00.167) 0:00:00.167 ********** 2025-03-23 13:16:05.176514 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:16:05.178605 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:16:05.179516 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:16:05.181111 | orchestrator | ok: [testbed-manager] 2025-03-23 13:16:05.182106 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:16:05.184400 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:16:05.185318 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:16:05.185347 | orchestrator | 2025-03-23 13:16:05.185363 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-03-23 13:16:05.185384 | orchestrator | 2025-03-23 13:16:05.186093 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-03-23 13:16:05.186439 | orchestrator | Sunday 23 March 2025 13:16:05 +0000 (0:00:05.253) 0:00:05.420 ********** 2025-03-23 13:16:05.391165 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:16:05.477201 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:16:05.573843 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:16:05.661377 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:16:05.746725 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:16:05.796196 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:16:05.796837 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:16:05.797948 | orchestrator | 2025-03-23 13:16:05.801185 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:16:05.801237 | orchestrator | 2025-03-23 13:16:05 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-03-23 13:16:05.803154 | orchestrator | 2025-03-23 13:16:05 | INFO  | Please wait and do not abort execution. 2025-03-23 13:16:05.803191 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-23 13:16:05.806093 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-23 13:16:05.807990 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-23 13:16:05.811673 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-23 13:16:05.812894 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-23 13:16:05.814786 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-23 13:16:05.814851 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-23 13:16:05.818226 | orchestrator | 2025-03-23 13:16:05.819288 | orchestrator | Sunday 23 March 2025 13:16:05 +0000 (0:00:00.618) 0:00:06.038 ********** 2025-03-23 13:16:05.819692 | orchestrator | =============================================================================== 2025-03-23 13:16:05.819723 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.25s 2025-03-23 13:16:05.819879 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.62s 2025-03-23 13:16:06.475294 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-03-23 13:16:06.491805 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-03-23 13:16:06.508663 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-03-23 13:16:06.527910 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-03-23 13:16:06.541160 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-03-23 13:16:06.559516 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-03-23 13:16:06.579774 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-03-23 13:16:06.599815 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-03-23 13:16:06.621940 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-03-23 13:16:06.643592 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-03-23 13:16:06.662409 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-03-23 13:16:06.678536 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-03-23 13:16:06.693594 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh 
/usr/local/bin/upgrade-infrastructure 2025-03-23 13:16:06.706274 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-03-23 13:16:06.718709 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-03-23 13:16:06.731353 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-03-23 13:16:06.747876 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-03-23 13:16:06.766730 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-03-23 13:16:06.785560 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-03-23 13:16:06.809656 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-03-23 13:16:06.823712 | orchestrator | + [[ false == \t\r\u\e ]] 2025-03-23 13:16:06.923971 | orchestrator | changed 2025-03-23 13:16:06.973759 | 2025-03-23 13:16:06.973875 | TASK [Deploy services] 2025-03-23 13:16:07.081852 | orchestrator | skipping: Conditional result was False 2025-03-23 13:16:07.101006 | 2025-03-23 13:16:07.101131 | TASK [Deploy in a nutshell] 2025-03-23 13:16:07.792372 | orchestrator | + set -e 2025-03-23 13:16:07.792646 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-03-23 13:16:07.792682 | orchestrator | ++ export INTERACTIVE=false 2025-03-23 13:16:07.792700 | orchestrator | ++ INTERACTIVE=false 2025-03-23 13:16:07.792744 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-03-23 13:16:07.792762 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-03-23 13:16:07.792778 | orchestrator | + source /opt/manager-vars.sh 2025-03-23 13:16:07.792802 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-03-23 13:16:07.792826 | orchestrator | ++ NUMBER_OF_NODES=6 2025-03-23 13:16:07.792843 | orchestrator | ++ export CEPH_VERSION=quincy 2025-03-23 13:16:07.792857 | orchestrator | ++ CEPH_VERSION=quincy 2025-03-23 13:16:07.792871 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-03-23 13:16:07.792885 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-03-23 13:16:07.792899 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-03-23 13:16:07.792914 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-03-23 13:16:07.792928 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-03-23 13:16:07.792943 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-03-23 13:16:07.792957 | orchestrator | ++ export ARA=false 2025-03-23 13:16:07.792971 | orchestrator | ++ ARA=false 2025-03-23 13:16:07.792985 | orchestrator | ++ export TEMPEST=false 2025-03-23 13:16:07.792998 | orchestrator | ++ TEMPEST=false 2025-03-23 13:16:07.793012 | orchestrator | ++ export IS_ZUUL=true 2025-03-23 13:16:07.793026 | orchestrator | ++ IS_ZUUL=true 2025-03-23 13:16:07.793104 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.177 2025-03-23 13:16:07.793122 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.177 2025-03-23 13:16:07.793137 | orchestrator | ++ export EXTERNAL_API=false 2025-03-23 13:16:07.793151 | orchestrator | ++ EXTERNAL_API=false 2025-03-23 13:16:07.793164 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-03-23 13:16:07.793178 | orchestrator | ++ IMAGE_USER=ubuntu 2025-03-23 13:16:07.793192 | 
orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-03-23 13:16:07.793207 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-03-23 13:16:07.793221 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-03-23 13:16:07.793243 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-03-23 13:16:07.793267 | orchestrator | 2025-03-23 13:16:07.850195 | orchestrator | # PULL IMAGES 2025-03-23 13:16:07.850250 | orchestrator | 2025-03-23 13:16:07.850265 | orchestrator | + echo 2025-03-23 13:16:07.850279 | orchestrator | + echo '# PULL IMAGES' 2025-03-23 13:16:07.850294 | orchestrator | + echo 2025-03-23 13:16:07.850309 | orchestrator | ++ semver 8.1.0 7.0.0 2025-03-23 13:16:07.850345 | orchestrator | + [[ 1 -ge 0 ]] 2025-03-23 13:16:09.429838 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-03-23 13:16:09.429990 | orchestrator | 2025-03-23 13:16:09 | INFO  | Trying to run play pull-images in environment custom 2025-03-23 13:16:09.484489 | orchestrator | 2025-03-23 13:16:09 | INFO  | Task 73a66078-5470-45b6-a303-fa3d1396956d (pull-images) was prepared for execution. 2025-03-23 13:16:12.704993 | orchestrator | 2025-03-23 13:16:09 | INFO  | It takes a moment until task 73a66078-5470-45b6-a303-fa3d1396956d (pull-images) has been started and output is visible here. 2025-03-23 13:16:12.705106 | orchestrator | 2025-03-23 13:16:12.705462 | orchestrator | PLAY [Pull images] ************************************************************* 2025-03-23 13:16:12.706222 | orchestrator | 2025-03-23 13:16:12.706700 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-03-23 13:16:12.706727 | orchestrator | Sunday 23 March 2025 13:16:12 +0000 (0:00:00.148) 0:00:00.148 ********** 2025-03-23 13:16:53.663680 | orchestrator | changed: [testbed-manager] 2025-03-23 13:17:43.332302 | orchestrator | 2025-03-23 13:17:43.332461 | orchestrator | TASK [Pull other images] ******************************************************* 2025-03-23 13:17:43.332484 | orchestrator | Sunday 23 March 2025 13:16:53 +0000 (0:00:40.954) 0:00:41.102 ********** 2025-03-23 13:17:43.332516 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-03-23 13:17:43.332943 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-03-23 13:17:43.334757 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-03-23 13:17:43.336221 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-03-23 13:17:43.336271 | orchestrator | changed: [testbed-manager] => (item=common) 2025-03-23 13:17:43.337240 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-03-23 13:17:43.338012 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-03-23 13:17:43.340854 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-03-23 13:17:43.346188 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-03-23 13:17:43.346443 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-03-23 13:17:43.346567 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-03-23 13:17:43.346604 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-03-23 13:17:43.347448 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-03-23 13:17:43.348350 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-03-23 13:17:43.349054 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-03-23 13:17:43.349703 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-03-23 13:17:43.350413 | 
orchestrator | changed: [testbed-manager] => (item=octavia) 2025-03-23 13:17:43.351280 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-03-23 13:17:43.351696 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-03-23 13:17:43.352507 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-03-23 13:17:43.352718 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-03-23 13:17:43.353335 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-03-23 13:17:43.353519 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-03-23 13:17:43.354395 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-03-23 13:17:43.354805 | orchestrator | 2025-03-23 13:17:43.355916 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:17:43.356209 | orchestrator | 2025-03-23 13:17:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-23 13:17:43.356234 | orchestrator | 2025-03-23 13:17:43 | INFO  | Please wait and do not abort execution. 2025-03-23 13:17:43.356255 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:17:43.356710 | orchestrator | 2025-03-23 13:17:43.357013 | orchestrator | Sunday 23 March 2025 13:17:43 +0000 (0:00:49.671) 0:01:30.774 ********** 2025-03-23 13:17:43.357459 | orchestrator | =============================================================================== 2025-03-23 13:17:43.357826 | orchestrator | Pull other images ------------------------------------------------------ 49.67s 2025-03-23 13:17:43.358387 | orchestrator | Pull keystone image ---------------------------------------------------- 40.95s 2025-03-23 13:17:45.528618 | orchestrator | 2025-03-23 13:17:45 | INFO  | Trying to run play wipe-partitions in environment custom 2025-03-23 13:17:45.583410 | orchestrator | 2025-03-23 13:17:45 | INFO  | Task 06011e0d-e7d4-4ebd-af62-719bdb1bbbbb (wipe-partitions) was prepared for execution. 2025-03-23 13:17:49.005489 | orchestrator | 2025-03-23 13:17:45 | INFO  | It takes a moment until task 06011e0d-e7d4-4ebd-af62-719bdb1bbbbb (wipe-partitions) has been started and output is visible here. 
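
The play recap above closes the image pre-pull step of the nutshell deployment: the trace shows the script sourcing /opt/manager-vars.sh and then invoking the OSISM CLI directly. A minimal sketch for re-running that step by hand, with the command copied verbatim from the trace above (what -r and -e resolve to internally is not shown in the log):

    # Re-run the image pre-pull the same way the nutshell script does above.
    source /opt/manager-vars.sh
    osism apply -r 2 -e custom pull-images
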
2025-03-23 13:17:49.005631 | orchestrator | 2025-03-23 13:17:49.006563 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-03-23 13:17:49.007129 | orchestrator | 2025-03-23 13:17:49.007949 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-03-23 13:17:49.008460 | orchestrator | Sunday 23 March 2025 13:17:48 +0000 (0:00:00.128) 0:00:00.128 ********** 2025-03-23 13:17:49.690662 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:17:49.692004 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:17:49.692531 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:17:49.692899 | orchestrator | 2025-03-23 13:17:49.693784 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-03-23 13:17:49.694063 | orchestrator | Sunday 23 March 2025 13:17:49 +0000 (0:00:00.684) 0:00:00.813 ********** 2025-03-23 13:17:49.865327 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:17:49.965577 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:17:49.966189 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:17:49.971870 | orchestrator | 2025-03-23 13:17:49.972676 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-03-23 13:17:49.972709 | orchestrator | Sunday 23 March 2025 13:17:49 +0000 (0:00:00.277) 0:00:01.091 ********** 2025-03-23 13:17:50.795269 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:17:50.797557 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:17:50.797614 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:17:50.798261 | orchestrator | 2025-03-23 13:17:50.798299 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-03-23 13:17:50.798620 | orchestrator | Sunday 23 March 2025 13:17:50 +0000 (0:00:00.826) 0:00:01.917 ********** 2025-03-23 13:17:51.004388 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:17:51.133881 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:17:51.134667 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:17:51.134707 | orchestrator | 2025-03-23 13:17:51.135857 | orchestrator | TASK [Check device availability] *********************************************** 2025-03-23 13:17:51.139484 | orchestrator | Sunday 23 March 2025 13:17:51 +0000 (0:00:00.340) 0:00:02.257 ********** 2025-03-23 13:17:52.401920 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-03-23 13:17:52.404747 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-03-23 13:17:52.407273 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-03-23 13:17:52.407380 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-03-23 13:17:52.408340 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-03-23 13:17:52.409317 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-03-23 13:17:52.409739 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-03-23 13:17:52.410779 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-03-23 13:17:52.412174 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-03-23 13:17:52.415482 | orchestrator | 2025-03-23 13:17:52.417862 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-03-23 13:17:52.418801 | orchestrator | Sunday 23 March 2025 13:17:52 +0000 (0:00:01.269) 0:00:03.527 ********** 2025-03-23 13:17:53.838212 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-03-23 13:17:53.839751 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-03-23 13:17:53.839865 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-03-23 13:17:53.839885 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-03-23 13:17:53.839915 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-03-23 13:17:53.840198 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-03-23 13:17:53.840229 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-03-23 13:17:53.840523 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-03-23 13:17:53.840825 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-03-23 13:17:53.841253 | orchestrator | 2025-03-23 13:17:53.844392 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-03-23 13:17:53.844655 | orchestrator | Sunday 23 March 2025 13:17:53 +0000 (0:00:01.433) 0:00:04.961 ********** 2025-03-23 13:17:56.349843 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-03-23 13:17:56.353312 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-03-23 13:17:56.353360 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-03-23 13:17:56.353598 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-03-23 13:17:56.353647 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-03-23 13:17:56.354010 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-03-23 13:17:56.354358 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-03-23 13:17:56.354995 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-03-23 13:17:56.355207 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-03-23 13:17:56.355618 | orchestrator | 2025-03-23 13:17:56.355910 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-03-23 13:17:56.356290 | orchestrator | Sunday 23 March 2025 13:17:56 +0000 (0:00:02.509) 0:00:07.470 ********** 2025-03-23 13:17:56.982612 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:17:56.982819 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:17:56.982917 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:17:56.983352 | orchestrator | 2025-03-23 13:17:56.983675 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-03-23 13:17:56.984238 | orchestrator | Sunday 23 March 2025 13:17:56 +0000 (0:00:00.639) 0:00:08.109 ********** 2025-03-23 13:17:57.613463 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:17:57.613608 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:17:57.614874 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:17:57.615609 | orchestrator | 2025-03-23 13:17:57.618094 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:17:57.618202 | orchestrator | 2025-03-23 13:17:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-23 13:17:57.618224 | orchestrator | 2025-03-23 13:17:57 | INFO  | Please wait and do not abort execution. 
2025-03-23 13:17:57.618245 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:17:57.618570 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:17:57.618941 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:17:57.619351 | orchestrator | 2025-03-23 13:17:57.619575 | orchestrator | Sunday 23 March 2025 13:17:57 +0000 (0:00:00.630) 0:00:08.739 ********** 2025-03-23 13:17:57.619979 | orchestrator | =============================================================================== 2025-03-23 13:17:57.620393 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.51s 2025-03-23 13:17:57.620725 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.43s 2025-03-23 13:17:57.621290 | orchestrator | Check device availability ----------------------------------------------- 1.27s 2025-03-23 13:17:57.621476 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.83s 2025-03-23 13:17:57.622674 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.68s 2025-03-23 13:17:57.626818 | orchestrator | Reload udev rules ------------------------------------------------------- 0.64s 2025-03-23 13:17:57.626846 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s 2025-03-23 13:17:57.626866 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.34s 2025-03-23 13:18:00.187105 | orchestrator | Remove all rook related logical devices --------------------------------- 0.28s 2025-03-23 13:18:00.187272 | orchestrator | 2025-03-23 13:18:00 | INFO  | Task 96dd0ffe-6972-457d-ae8c-27ea1c7e3e94 (facts) was prepared for execution. 2025-03-23 13:18:03.433480 | orchestrator | 2025-03-23 13:18:00 | INFO  | It takes a moment until task 96dd0ffe-6972-457d-ae8c-27ea1c7e3e94 (facts) has been started and output is visible here. 
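
The wipe-partitions recap above shows each storage node (testbed-node-3/4/5) getting its OSD candidate disks /dev/sdb, /dev/sdc and /dev/sdd cleaned: signatures wiped, the first 32M zeroed, and udev re-triggered. A rough shell equivalent of those task names, as an assumption about what the play wraps (the module arguments themselves are not printed in the log):

    # Approximate manual equivalent of the logged wipe-partitions tasks.
    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        wipefs --all "$dev"                       # "Wipe partitions with wipefs"
        dd if=/dev/zero of="$dev" bs=1M count=32  # "Overwrite first 32M with zeros"
    done
    udevadm control --reload-rules                # "Reload udev rules"
    udevadm trigger                               # "Request device events from the kernel"
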
2025-03-23 13:18:03.433606 | orchestrator | 2025-03-23 13:18:03.435078 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-03-23 13:18:03.436001 | orchestrator | 2025-03-23 13:18:03.436766 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-03-23 13:18:03.439464 | orchestrator | Sunday 23 March 2025 13:18:03 +0000 (0:00:00.216) 0:00:00.216 ********** 2025-03-23 13:18:04.546247 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:18:04.546578 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:18:04.547769 | orchestrator | ok: [testbed-manager] 2025-03-23 13:18:04.549129 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:18:04.550560 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:18:04.551392 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:18:04.552193 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:18:04.553064 | orchestrator | 2025-03-23 13:18:04.553878 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-03-23 13:18:04.554511 | orchestrator | Sunday 23 March 2025 13:18:04 +0000 (0:00:01.114) 0:00:01.330 ********** 2025-03-23 13:18:04.763501 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:18:04.891047 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:18:05.009411 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:18:05.120589 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:18:05.222614 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:06.020840 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:06.025409 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:18:06.029092 | orchestrator | 2025-03-23 13:18:06.030605 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-03-23 13:18:06.033201 | orchestrator | 2025-03-23 13:18:06.033609 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-03-23 13:18:06.034507 | orchestrator | Sunday 23 March 2025 13:18:06 +0000 (0:00:01.477) 0:00:02.808 ********** 2025-03-23 13:18:10.876541 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:18:10.876784 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:18:10.878714 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:18:10.878914 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:18:10.882689 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:18:10.883510 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:18:10.884756 | orchestrator | ok: [testbed-manager] 2025-03-23 13:18:10.885258 | orchestrator | 2025-03-23 13:18:10.886300 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-03-23 13:18:10.887019 | orchestrator | 2025-03-23 13:18:10.888332 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-03-23 13:18:10.890271 | orchestrator | Sunday 23 March 2025 13:18:10 +0000 (0:00:04.856) 0:00:07.664 ********** 2025-03-23 13:18:11.229196 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:18:11.322776 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:18:11.406818 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:18:11.490151 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:18:11.584007 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:11.629105 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:11.629342 | orchestrator | skipping: 
[testbed-node-5] 2025-03-23 13:18:11.630627 | orchestrator | 2025-03-23 13:18:11.631908 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:18:11.633446 | orchestrator | 2025-03-23 13:18:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-23 13:18:11.635212 | orchestrator | 2025-03-23 13:18:11 | INFO  | Please wait and do not abort execution. 2025-03-23 13:18:11.635281 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:18:11.635344 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:18:11.635785 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:18:11.636198 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:18:11.636528 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:18:11.636984 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:18:11.637552 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:18:11.637788 | orchestrator | 2025-03-23 13:18:11.638122 | orchestrator | Sunday 23 March 2025 13:18:11 +0000 (0:00:00.750) 0:00:08.415 ********** 2025-03-23 13:18:11.638501 | orchestrator | =============================================================================== 2025-03-23 13:18:11.638712 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.86s 2025-03-23 13:18:11.639385 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.48s 2025-03-23 13:18:11.640181 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.11s 2025-03-23 13:18:14.259797 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.75s 2025-03-23 13:18:14.259918 | orchestrator | 2025-03-23 13:18:14 | INFO  | Task 2640bec7-0580-44ae-9aac-b26f448becfd (ceph-configure-lvm-volumes) was prepared for execution. 2025-03-23 13:18:18.071475 | orchestrator | 2025-03-23 13:18:14 | INFO  | It takes a moment until task 2640bec7-0580-44ae-9aac-b26f448becfd (ceph-configure-lvm-volumes) has been started and output is visible here. 
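
The ceph-configure-lvm-volumes task that starts below inspects the block devices on each storage node, assigns a stable osd_lvm_uuid per OSD disk and derives the ceph-ansible lvm_volumes entries from it (the resulting structure is printed further down for testbed-node-3 and testbed-node-4). A small illustration of that naming pattern only, using the UUID the play reports for sdb on testbed-node-3:

    # Naming pattern as printed in the configuration data below:
    # LV "osd-block-<uuid>" inside VG "ceph-<uuid>".
    uuid="8229b7a0-df8d-5815-8245-22e3d24081aa"   # osd_lvm_uuid for sdb on testbed-node-3
    echo "data:    osd-block-${uuid}"
    echo "data_vg: ceph-${uuid}"
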
2025-03-23 13:18:18.071606 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-03-23 13:18:18.699693 | orchestrator | 2025-03-23 13:18:18.700291 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-03-23 13:18:18.701695 | orchestrator | 2025-03-23 13:18:18.701801 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-23 13:18:18.702289 | orchestrator | Sunday 23 March 2025 13:18:18 +0000 (0:00:00.528) 0:00:00.528 ********** 2025-03-23 13:18:18.953128 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-03-23 13:18:18.954740 | orchestrator | 2025-03-23 13:18:18.955139 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-23 13:18:18.955194 | orchestrator | Sunday 23 March 2025 13:18:18 +0000 (0:00:00.258) 0:00:00.787 ********** 2025-03-23 13:18:19.222303 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:18:19.222463 | orchestrator | 2025-03-23 13:18:19.222686 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:19.222929 | orchestrator | Sunday 23 March 2025 13:18:19 +0000 (0:00:00.263) 0:00:01.051 ********** 2025-03-23 13:18:19.807135 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-03-23 13:18:19.809448 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-03-23 13:18:19.809486 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-03-23 13:18:19.809883 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-03-23 13:18:19.811128 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-03-23 13:18:19.813296 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-03-23 13:18:19.813687 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-03-23 13:18:19.814094 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-03-23 13:18:19.815021 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-03-23 13:18:19.815221 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-03-23 13:18:19.815675 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-03-23 13:18:19.816061 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-03-23 13:18:19.817451 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-03-23 13:18:19.817816 | orchestrator | 2025-03-23 13:18:19.818704 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:19.819220 | orchestrator | Sunday 23 March 2025 13:18:19 +0000 (0:00:00.587) 0:00:01.638 ********** 2025-03-23 13:18:20.015145 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:20.015336 | orchestrator | 2025-03-23 13:18:20.015363 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:20.015637 | orchestrator | Sunday 23 March 2025 13:18:20 +0000 
(0:00:00.210) 0:00:01.848 ********** 2025-03-23 13:18:20.231216 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:20.553968 | orchestrator | 2025-03-23 13:18:20.554134 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:20.554156 | orchestrator | Sunday 23 March 2025 13:18:20 +0000 (0:00:00.210) 0:00:02.059 ********** 2025-03-23 13:18:20.554220 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:20.555931 | orchestrator | 2025-03-23 13:18:20.557482 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:20.557862 | orchestrator | Sunday 23 March 2025 13:18:20 +0000 (0:00:00.325) 0:00:02.385 ********** 2025-03-23 13:18:20.764147 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:20.765548 | orchestrator | 2025-03-23 13:18:20.964622 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:20.964693 | orchestrator | Sunday 23 March 2025 13:18:20 +0000 (0:00:00.213) 0:00:02.598 ********** 2025-03-23 13:18:20.964719 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:20.965266 | orchestrator | 2025-03-23 13:18:20.965778 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:20.966768 | orchestrator | Sunday 23 March 2025 13:18:20 +0000 (0:00:00.199) 0:00:02.798 ********** 2025-03-23 13:18:21.173930 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:21.174971 | orchestrator | 2025-03-23 13:18:21.369747 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:21.370593 | orchestrator | Sunday 23 March 2025 13:18:21 +0000 (0:00:00.208) 0:00:03.007 ********** 2025-03-23 13:18:21.370645 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:21.370706 | orchestrator | 2025-03-23 13:18:21.370730 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:21.371283 | orchestrator | Sunday 23 March 2025 13:18:21 +0000 (0:00:00.196) 0:00:03.203 ********** 2025-03-23 13:18:21.580797 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:21.581392 | orchestrator | 2025-03-23 13:18:21.581937 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:21.582084 | orchestrator | Sunday 23 March 2025 13:18:21 +0000 (0:00:00.208) 0:00:03.411 ********** 2025-03-23 13:18:22.236924 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48) 2025-03-23 13:18:22.237784 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48) 2025-03-23 13:18:22.238257 | orchestrator | 2025-03-23 13:18:22.238681 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:22.239301 | orchestrator | Sunday 23 March 2025 13:18:22 +0000 (0:00:00.657) 0:00:04.069 ********** 2025-03-23 13:18:23.146435 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_006f5652-5a72-4e84-ab95-2470543ee754) 2025-03-23 13:18:23.146627 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_006f5652-5a72-4e84-ab95-2470543ee754) 2025-03-23 13:18:23.147047 | orchestrator | 2025-03-23 13:18:23.148284 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 
13:18:23.628529 | orchestrator | Sunday 23 March 2025 13:18:23 +0000 (0:00:00.908) 0:00:04.977 ********** 2025-03-23 13:18:23.628639 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3a31d3ab-ce8e-4019-949d-50084d47806b) 2025-03-23 13:18:23.628938 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3a31d3ab-ce8e-4019-949d-50084d47806b) 2025-03-23 13:18:23.629370 | orchestrator | 2025-03-23 13:18:23.629847 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:23.630662 | orchestrator | Sunday 23 March 2025 13:18:23 +0000 (0:00:00.478) 0:00:05.456 ********** 2025-03-23 13:18:24.087012 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4783091f-b49d-4bb1-a12d-8bcd2b1f8992) 2025-03-23 13:18:24.087356 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4783091f-b49d-4bb1-a12d-8bcd2b1f8992) 2025-03-23 13:18:24.087770 | orchestrator | 2025-03-23 13:18:24.088219 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:24.088577 | orchestrator | Sunday 23 March 2025 13:18:24 +0000 (0:00:00.460) 0:00:05.917 ********** 2025-03-23 13:18:24.566788 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-23 13:18:24.566995 | orchestrator | 2025-03-23 13:18:24.568396 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:24.568776 | orchestrator | Sunday 23 March 2025 13:18:24 +0000 (0:00:00.483) 0:00:06.400 ********** 2025-03-23 13:18:25.185487 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-03-23 13:18:25.185956 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-03-23 13:18:25.185994 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-03-23 13:18:25.189081 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-03-23 13:18:25.189583 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-03-23 13:18:25.190094 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-03-23 13:18:25.192360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-03-23 13:18:25.192390 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-03-23 13:18:25.192755 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-03-23 13:18:25.193278 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-03-23 13:18:25.193444 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-03-23 13:18:25.193719 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-03-23 13:18:25.194946 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-03-23 13:18:25.195374 | orchestrator | 2025-03-23 13:18:25.195473 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:25.196125 | orchestrator | Sunday 23 March 2025 13:18:25 +0000 
(0:00:00.613) 0:00:07.014 ********** 2025-03-23 13:18:25.503109 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:25.503373 | orchestrator | 2025-03-23 13:18:25.505041 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:25.746391 | orchestrator | Sunday 23 March 2025 13:18:25 +0000 (0:00:00.320) 0:00:07.334 ********** 2025-03-23 13:18:25.746491 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:25.746533 | orchestrator | 2025-03-23 13:18:25.746876 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:25.747335 | orchestrator | Sunday 23 March 2025 13:18:25 +0000 (0:00:00.247) 0:00:07.581 ********** 2025-03-23 13:18:25.978632 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:25.978763 | orchestrator | 2025-03-23 13:18:25.979137 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:25.980109 | orchestrator | Sunday 23 March 2025 13:18:25 +0000 (0:00:00.226) 0:00:07.808 ********** 2025-03-23 13:18:26.175467 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:26.175637 | orchestrator | 2025-03-23 13:18:26.175698 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:26.176517 | orchestrator | Sunday 23 March 2025 13:18:26 +0000 (0:00:00.198) 0:00:08.007 ********** 2025-03-23 13:18:26.671294 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:26.676316 | orchestrator | 2025-03-23 13:18:26.857665 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:26.857720 | orchestrator | Sunday 23 March 2025 13:18:26 +0000 (0:00:00.494) 0:00:08.502 ********** 2025-03-23 13:18:26.857743 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:26.858601 | orchestrator | 2025-03-23 13:18:27.093040 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:27.093111 | orchestrator | Sunday 23 March 2025 13:18:26 +0000 (0:00:00.190) 0:00:08.692 ********** 2025-03-23 13:18:27.093166 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:27.093277 | orchestrator | 2025-03-23 13:18:27.093298 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:27.275511 | orchestrator | Sunday 23 March 2025 13:18:27 +0000 (0:00:00.230) 0:00:08.923 ********** 2025-03-23 13:18:27.275578 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:27.278691 | orchestrator | 2025-03-23 13:18:27.279389 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:27.279661 | orchestrator | Sunday 23 March 2025 13:18:27 +0000 (0:00:00.187) 0:00:09.110 ********** 2025-03-23 13:18:28.095968 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-03-23 13:18:28.096133 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-03-23 13:18:28.096160 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-03-23 13:18:28.096304 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-03-23 13:18:28.097313 | orchestrator | 2025-03-23 13:18:28.097553 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:28.099405 | orchestrator | Sunday 23 March 2025 13:18:28 +0000 (0:00:00.818) 0:00:09.928 ********** 2025-03-23 13:18:28.408572 | orchestrator | 
skipping: [testbed-node-3] 2025-03-23 13:18:28.412451 | orchestrator | 2025-03-23 13:18:28.414761 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:28.415002 | orchestrator | Sunday 23 March 2025 13:18:28 +0000 (0:00:00.314) 0:00:10.242 ********** 2025-03-23 13:18:28.650406 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:28.651742 | orchestrator | 2025-03-23 13:18:28.651955 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:28.651992 | orchestrator | Sunday 23 March 2025 13:18:28 +0000 (0:00:00.239) 0:00:10.482 ********** 2025-03-23 13:18:28.888703 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:28.890759 | orchestrator | 2025-03-23 13:18:28.891948 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:28.892450 | orchestrator | Sunday 23 March 2025 13:18:28 +0000 (0:00:00.231) 0:00:10.714 ********** 2025-03-23 13:18:29.221437 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:29.222292 | orchestrator | 2025-03-23 13:18:29.225455 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-03-23 13:18:29.226708 | orchestrator | Sunday 23 March 2025 13:18:29 +0000 (0:00:00.336) 0:00:11.051 ********** 2025-03-23 13:18:29.448224 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-03-23 13:18:29.449309 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-03-23 13:18:29.450380 | orchestrator | 2025-03-23 13:18:29.450907 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-03-23 13:18:29.452876 | orchestrator | Sunday 23 March 2025 13:18:29 +0000 (0:00:00.227) 0:00:11.278 ********** 2025-03-23 13:18:29.628755 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:29.630591 | orchestrator | 2025-03-23 13:18:29.633769 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-03-23 13:18:29.636746 | orchestrator | Sunday 23 March 2025 13:18:29 +0000 (0:00:00.181) 0:00:11.459 ********** 2025-03-23 13:18:30.051877 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:30.053087 | orchestrator | 2025-03-23 13:18:30.054386 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-03-23 13:18:30.055359 | orchestrator | Sunday 23 March 2025 13:18:30 +0000 (0:00:00.420) 0:00:11.880 ********** 2025-03-23 13:18:30.209136 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:30.210917 | orchestrator | 2025-03-23 13:18:30.212276 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-03-23 13:18:30.213343 | orchestrator | Sunday 23 March 2025 13:18:30 +0000 (0:00:00.160) 0:00:12.040 ********** 2025-03-23 13:18:30.374707 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:18:30.375614 | orchestrator | 2025-03-23 13:18:30.376471 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-03-23 13:18:30.377059 | orchestrator | Sunday 23 March 2025 13:18:30 +0000 (0:00:00.161) 0:00:12.202 ********** 2025-03-23 13:18:30.662055 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8229b7a0-df8d-5815-8245-22e3d24081aa'}}) 2025-03-23 13:18:30.663604 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 
'value': {'osd_lvm_uuid': '0ab6ed36-da2c-5faf-8aed-224e80357d25'}}) 2025-03-23 13:18:30.664715 | orchestrator | 2025-03-23 13:18:30.666061 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-03-23 13:18:30.667297 | orchestrator | Sunday 23 March 2025 13:18:30 +0000 (0:00:00.289) 0:00:12.492 ********** 2025-03-23 13:18:30.879051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8229b7a0-df8d-5815-8245-22e3d24081aa'}})  2025-03-23 13:18:30.880700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0ab6ed36-da2c-5faf-8aed-224e80357d25'}})  2025-03-23 13:18:30.882766 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:30.883720 | orchestrator | 2025-03-23 13:18:30.884786 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-03-23 13:18:30.885969 | orchestrator | Sunday 23 March 2025 13:18:30 +0000 (0:00:00.220) 0:00:12.713 ********** 2025-03-23 13:18:31.073284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8229b7a0-df8d-5815-8245-22e3d24081aa'}})  2025-03-23 13:18:31.074302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0ab6ed36-da2c-5faf-8aed-224e80357d25'}})  2025-03-23 13:18:31.075072 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:31.076265 | orchestrator | 2025-03-23 13:18:31.077535 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-03-23 13:18:31.078602 | orchestrator | Sunday 23 March 2025 13:18:31 +0000 (0:00:00.191) 0:00:12.904 ********** 2025-03-23 13:18:31.263981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8229b7a0-df8d-5815-8245-22e3d24081aa'}})  2025-03-23 13:18:31.265773 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0ab6ed36-da2c-5faf-8aed-224e80357d25'}})  2025-03-23 13:18:31.266103 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:31.266837 | orchestrator | 2025-03-23 13:18:31.268236 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-03-23 13:18:31.271742 | orchestrator | Sunday 23 March 2025 13:18:31 +0000 (0:00:00.193) 0:00:13.098 ********** 2025-03-23 13:18:31.447849 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:18:31.448381 | orchestrator | 2025-03-23 13:18:31.449493 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-03-23 13:18:31.449970 | orchestrator | Sunday 23 March 2025 13:18:31 +0000 (0:00:00.182) 0:00:13.280 ********** 2025-03-23 13:18:31.612838 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:18:31.616363 | orchestrator | 2025-03-23 13:18:31.616852 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-03-23 13:18:31.620671 | orchestrator | Sunday 23 March 2025 13:18:31 +0000 (0:00:00.162) 0:00:13.443 ********** 2025-03-23 13:18:31.769282 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:31.772424 | orchestrator | 2025-03-23 13:18:31.773224 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-03-23 13:18:31.773259 | orchestrator | Sunday 23 March 2025 13:18:31 +0000 (0:00:00.153) 0:00:13.597 ********** 2025-03-23 13:18:31.941679 | orchestrator | skipping: [testbed-node-3] 2025-03-23 
13:18:31.942413 | orchestrator | 2025-03-23 13:18:31.944518 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-03-23 13:18:31.945059 | orchestrator | Sunday 23 March 2025 13:18:31 +0000 (0:00:00.173) 0:00:13.771 ********** 2025-03-23 13:18:32.333170 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:32.333360 | orchestrator | 2025-03-23 13:18:32.333383 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-03-23 13:18:32.333405 | orchestrator | Sunday 23 March 2025 13:18:32 +0000 (0:00:00.390) 0:00:14.161 ********** 2025-03-23 13:18:32.497314 | orchestrator | ok: [testbed-node-3] => { 2025-03-23 13:18:32.497996 | orchestrator |  "ceph_osd_devices": { 2025-03-23 13:18:32.501520 | orchestrator |  "sdb": { 2025-03-23 13:18:32.501970 | orchestrator |  "osd_lvm_uuid": "8229b7a0-df8d-5815-8245-22e3d24081aa" 2025-03-23 13:18:32.502909 | orchestrator |  }, 2025-03-23 13:18:32.503537 | orchestrator |  "sdc": { 2025-03-23 13:18:32.504482 | orchestrator |  "osd_lvm_uuid": "0ab6ed36-da2c-5faf-8aed-224e80357d25" 2025-03-23 13:18:32.505083 | orchestrator |  } 2025-03-23 13:18:32.505537 | orchestrator |  } 2025-03-23 13:18:32.506528 | orchestrator | } 2025-03-23 13:18:32.507245 | orchestrator | 2025-03-23 13:18:32.508074 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-03-23 13:18:32.508964 | orchestrator | Sunday 23 March 2025 13:18:32 +0000 (0:00:00.165) 0:00:14.327 ********** 2025-03-23 13:18:32.641339 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:32.642841 | orchestrator | 2025-03-23 13:18:32.644311 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-03-23 13:18:32.785912 | orchestrator | Sunday 23 March 2025 13:18:32 +0000 (0:00:00.144) 0:00:14.472 ********** 2025-03-23 13:18:32.785960 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:32.786992 | orchestrator | 2025-03-23 13:18:32.789433 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-03-23 13:18:32.791334 | orchestrator | Sunday 23 March 2025 13:18:32 +0000 (0:00:00.143) 0:00:14.616 ********** 2025-03-23 13:18:32.941141 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:18:32.943120 | orchestrator | 2025-03-23 13:18:32.944294 | orchestrator | TASK [Print configuration data] ************************************************ 2025-03-23 13:18:32.944991 | orchestrator | Sunday 23 March 2025 13:18:32 +0000 (0:00:00.157) 0:00:14.774 ********** 2025-03-23 13:18:33.231833 | orchestrator | changed: [testbed-node-3] => { 2025-03-23 13:18:33.232121 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-03-23 13:18:33.232636 | orchestrator |  "ceph_osd_devices": { 2025-03-23 13:18:33.233812 | orchestrator |  "sdb": { 2025-03-23 13:18:33.234507 | orchestrator |  "osd_lvm_uuid": "8229b7a0-df8d-5815-8245-22e3d24081aa" 2025-03-23 13:18:33.235109 | orchestrator |  }, 2025-03-23 13:18:33.235927 | orchestrator |  "sdc": { 2025-03-23 13:18:33.237733 | orchestrator |  "osd_lvm_uuid": "0ab6ed36-da2c-5faf-8aed-224e80357d25" 2025-03-23 13:18:33.238012 | orchestrator |  } 2025-03-23 13:18:33.238088 | orchestrator |  }, 2025-03-23 13:18:33.238498 | orchestrator |  "lvm_volumes": [ 2025-03-23 13:18:33.238948 | orchestrator |  { 2025-03-23 13:18:33.240384 | orchestrator |  "data": "osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa", 2025-03-23 13:18:33.240416 | orchestrator |  
"data_vg": "ceph-8229b7a0-df8d-5815-8245-22e3d24081aa" 2025-03-23 13:18:33.240621 | orchestrator |  }, 2025-03-23 13:18:33.243382 | orchestrator |  { 2025-03-23 13:18:33.243664 | orchestrator |  "data": "osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25", 2025-03-23 13:18:33.243689 | orchestrator |  "data_vg": "ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25" 2025-03-23 13:18:33.243705 | orchestrator |  } 2025-03-23 13:18:33.243720 | orchestrator |  ] 2025-03-23 13:18:33.243736 | orchestrator |  } 2025-03-23 13:18:33.243751 | orchestrator | } 2025-03-23 13:18:33.243770 | orchestrator | 2025-03-23 13:18:33.243791 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-03-23 13:18:33.243929 | orchestrator | Sunday 23 March 2025 13:18:33 +0000 (0:00:00.289) 0:00:15.063 ********** 2025-03-23 13:18:35.896519 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-03-23 13:18:35.897814 | orchestrator | 2025-03-23 13:18:35.898370 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-03-23 13:18:35.899666 | orchestrator | 2025-03-23 13:18:35.900938 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-23 13:18:35.901992 | orchestrator | Sunday 23 March 2025 13:18:35 +0000 (0:00:02.661) 0:00:17.725 ********** 2025-03-23 13:18:36.193090 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-03-23 13:18:36.193383 | orchestrator | 2025-03-23 13:18:36.193947 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-23 13:18:36.194339 | orchestrator | Sunday 23 March 2025 13:18:36 +0000 (0:00:00.298) 0:00:18.024 ********** 2025-03-23 13:18:36.505837 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:18:36.509245 | orchestrator | 2025-03-23 13:18:36.509674 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:36.510309 | orchestrator | Sunday 23 March 2025 13:18:36 +0000 (0:00:00.313) 0:00:18.338 ********** 2025-03-23 13:18:36.961333 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-03-23 13:18:36.964599 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-03-23 13:18:36.965026 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-03-23 13:18:36.965942 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-03-23 13:18:36.966416 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-03-23 13:18:36.966442 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-03-23 13:18:36.966462 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-03-23 13:18:36.967721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-03-23 13:18:36.967887 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-03-23 13:18:36.967917 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-03-23 13:18:36.970604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-03-23 13:18:36.971165 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-03-23 13:18:36.971223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-03-23 13:18:36.972372 | orchestrator | 2025-03-23 13:18:36.974157 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:36.974644 | orchestrator | Sunday 23 March 2025 13:18:36 +0000 (0:00:00.456) 0:00:18.794 ********** 2025-03-23 13:18:37.183557 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:37.184044 | orchestrator | 2025-03-23 13:18:37.184089 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:37.184440 | orchestrator | Sunday 23 March 2025 13:18:37 +0000 (0:00:00.220) 0:00:19.015 ********** 2025-03-23 13:18:37.434348 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:37.435919 | orchestrator | 2025-03-23 13:18:37.436151 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:37.436577 | orchestrator | Sunday 23 March 2025 13:18:37 +0000 (0:00:00.250) 0:00:19.266 ********** 2025-03-23 13:18:37.856421 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:37.861370 | orchestrator | 2025-03-23 13:18:38.531047 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:38.531164 | orchestrator | Sunday 23 March 2025 13:18:37 +0000 (0:00:00.422) 0:00:19.688 ********** 2025-03-23 13:18:38.531278 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:38.531555 | orchestrator | 2025-03-23 13:18:38.532298 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:38.534439 | orchestrator | Sunday 23 March 2025 13:18:38 +0000 (0:00:00.674) 0:00:20.363 ********** 2025-03-23 13:18:38.776733 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:38.781027 | orchestrator | 2025-03-23 13:18:38.781894 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:38.783352 | orchestrator | Sunday 23 March 2025 13:18:38 +0000 (0:00:00.245) 0:00:20.608 ********** 2025-03-23 13:18:39.006828 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:39.007828 | orchestrator | 2025-03-23 13:18:39.011684 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:39.219802 | orchestrator | Sunday 23 March 2025 13:18:39 +0000 (0:00:00.230) 0:00:20.839 ********** 2025-03-23 13:18:39.219942 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:39.220579 | orchestrator | 2025-03-23 13:18:39.221035 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:39.222014 | orchestrator | Sunday 23 March 2025 13:18:39 +0000 (0:00:00.213) 0:00:21.052 ********** 2025-03-23 13:18:39.484076 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:39.484539 | orchestrator | 2025-03-23 13:18:39.487885 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:39.488016 | orchestrator | Sunday 23 March 2025 13:18:39 +0000 (0:00:00.264) 0:00:21.317 ********** 2025-03-23 13:18:39.976013 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74) 2025-03-23 13:18:39.979011 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74) 2025-03-23 13:18:39.980807 | orchestrator | 2025-03-23 13:18:39.981342 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:39.982284 | orchestrator | Sunday 23 March 2025 13:18:39 +0000 (0:00:00.490) 0:00:21.807 ********** 2025-03-23 13:18:40.409357 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_87cd8e5e-a11a-492b-892b-8d669e416dd6) 2025-03-23 13:18:40.417800 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_87cd8e5e-a11a-492b-892b-8d669e416dd6) 2025-03-23 13:18:40.419631 | orchestrator | 2025-03-23 13:18:40.420046 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:40.420499 | orchestrator | Sunday 23 March 2025 13:18:40 +0000 (0:00:00.435) 0:00:22.243 ********** 2025-03-23 13:18:40.890140 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8fbee761-a5d2-4623-bf19-8989346ac6dd) 2025-03-23 13:18:40.890896 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8fbee761-a5d2-4623-bf19-8989346ac6dd) 2025-03-23 13:18:40.892198 | orchestrator | 2025-03-23 13:18:40.894985 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:40.898221 | orchestrator | Sunday 23 March 2025 13:18:40 +0000 (0:00:00.474) 0:00:22.717 ********** 2025-03-23 13:18:41.641581 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ea8f35dc-a35e-4089-a77c-db984e90bcf5) 2025-03-23 13:18:41.644140 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ea8f35dc-a35e-4089-a77c-db984e90bcf5) 2025-03-23 13:18:41.644830 | orchestrator | 2025-03-23 13:18:41.644874 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:41.645517 | orchestrator | Sunday 23 March 2025 13:18:41 +0000 (0:00:00.756) 0:00:23.474 ********** 2025-03-23 13:18:42.377150 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-23 13:18:42.378834 | orchestrator | 2025-03-23 13:18:42.379563 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:42.382749 | orchestrator | Sunday 23 March 2025 13:18:42 +0000 (0:00:00.735) 0:00:24.210 ********** 2025-03-23 13:18:42.816683 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-03-23 13:18:42.818350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-03-23 13:18:42.822109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-03-23 13:18:42.823598 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-03-23 13:18:42.824377 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-03-23 13:18:42.824747 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-03-23 13:18:42.825401 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-03-23 13:18:42.825870 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-03-23 13:18:42.826595 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-03-23 13:18:42.827313 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-03-23 13:18:42.827869 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-03-23 13:18:42.828281 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-03-23 13:18:42.828884 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-03-23 13:18:42.829609 | orchestrator | 2025-03-23 13:18:42.830126 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:42.830532 | orchestrator | Sunday 23 March 2025 13:18:42 +0000 (0:00:00.437) 0:00:24.647 ********** 2025-03-23 13:18:43.036223 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:43.037413 | orchestrator | 2025-03-23 13:18:43.038883 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:43.041915 | orchestrator | Sunday 23 March 2025 13:18:43 +0000 (0:00:00.220) 0:00:24.868 ********** 2025-03-23 13:18:43.288156 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:43.288688 | orchestrator | 2025-03-23 13:18:43.290432 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:43.292580 | orchestrator | Sunday 23 March 2025 13:18:43 +0000 (0:00:00.253) 0:00:25.121 ********** 2025-03-23 13:18:43.515372 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:43.515888 | orchestrator | 2025-03-23 13:18:43.516690 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:43.518225 | orchestrator | Sunday 23 March 2025 13:18:43 +0000 (0:00:00.227) 0:00:25.348 ********** 2025-03-23 13:18:43.734116 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:43.735140 | orchestrator | 2025-03-23 13:18:43.739996 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:43.740544 | orchestrator | Sunday 23 March 2025 13:18:43 +0000 (0:00:00.217) 0:00:25.565 ********** 2025-03-23 13:18:43.953815 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:43.955602 | orchestrator | 2025-03-23 13:18:43.957793 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:43.959048 | orchestrator | Sunday 23 March 2025 13:18:43 +0000 (0:00:00.218) 0:00:25.784 ********** 2025-03-23 13:18:44.150460 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:44.150885 | orchestrator | 2025-03-23 13:18:44.151384 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:44.154113 | orchestrator | Sunday 23 March 2025 13:18:44 +0000 (0:00:00.197) 0:00:25.982 ********** 2025-03-23 13:18:44.355060 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:44.355608 | orchestrator | 2025-03-23 13:18:44.358214 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:44.359277 | orchestrator | Sunday 23 March 2025 13:18:44 +0000 (0:00:00.203) 0:00:26.186 ********** 2025-03-23 13:18:44.595818 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:44.596688 | orchestrator | 2025-03-23 13:18:44.597547 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-03-23 13:18:44.598574 | orchestrator | Sunday 23 March 2025 13:18:44 +0000 (0:00:00.241) 0:00:26.427 ********** 2025-03-23 13:18:45.708732 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-03-23 13:18:45.709237 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-03-23 13:18:45.711960 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-03-23 13:18:45.712286 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-03-23 13:18:45.712316 | orchestrator | 2025-03-23 13:18:45.712786 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:45.713425 | orchestrator | Sunday 23 March 2025 13:18:45 +0000 (0:00:01.112) 0:00:27.540 ********** 2025-03-23 13:18:45.945260 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:45.946299 | orchestrator | 2025-03-23 13:18:45.946326 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:45.947385 | orchestrator | Sunday 23 March 2025 13:18:45 +0000 (0:00:00.235) 0:00:27.775 ********** 2025-03-23 13:18:46.151975 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:46.153259 | orchestrator | 2025-03-23 13:18:46.153668 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:46.154730 | orchestrator | Sunday 23 March 2025 13:18:46 +0000 (0:00:00.206) 0:00:27.982 ********** 2025-03-23 13:18:46.364003 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:46.365897 | orchestrator | 2025-03-23 13:18:46.366777 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:46.367718 | orchestrator | Sunday 23 March 2025 13:18:46 +0000 (0:00:00.214) 0:00:28.196 ********** 2025-03-23 13:18:46.589969 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:46.590173 | orchestrator | 2025-03-23 13:18:46.590245 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-03-23 13:18:46.591067 | orchestrator | Sunday 23 March 2025 13:18:46 +0000 (0:00:00.223) 0:00:28.420 ********** 2025-03-23 13:18:46.814197 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-03-23 13:18:46.815309 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-03-23 13:18:46.815789 | orchestrator | 2025-03-23 13:18:46.817471 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-03-23 13:18:46.818059 | orchestrator | Sunday 23 March 2025 13:18:46 +0000 (0:00:00.225) 0:00:28.646 ********** 2025-03-23 13:18:46.969836 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:46.972532 | orchestrator | 2025-03-23 13:18:46.973231 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-03-23 13:18:46.973246 | orchestrator | Sunday 23 March 2025 13:18:46 +0000 (0:00:00.154) 0:00:28.801 ********** 2025-03-23 13:18:47.120496 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:47.121160 | orchestrator | 2025-03-23 13:18:47.122608 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-03-23 13:18:47.123168 | orchestrator | Sunday 23 March 2025 13:18:47 +0000 (0:00:00.150) 0:00:28.951 ********** 2025-03-23 13:18:47.346681 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:47.347666 | orchestrator | 2025-03-23 
13:18:47.351748 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-03-23 13:18:47.353005 | orchestrator | Sunday 23 March 2025 13:18:47 +0000 (0:00:00.227) 0:00:29.179 ********** 2025-03-23 13:18:47.490520 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:18:47.492028 | orchestrator | 2025-03-23 13:18:47.495073 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-03-23 13:18:47.691623 | orchestrator | Sunday 23 March 2025 13:18:47 +0000 (0:00:00.144) 0:00:29.323 ********** 2025-03-23 13:18:47.691690 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'}}) 2025-03-23 13:18:47.692601 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'}}) 2025-03-23 13:18:47.694124 | orchestrator | 2025-03-23 13:18:47.695209 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-03-23 13:18:47.696224 | orchestrator | Sunday 23 March 2025 13:18:47 +0000 (0:00:00.200) 0:00:29.524 ********** 2025-03-23 13:18:48.086336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'}})  2025-03-23 13:18:48.088531 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'}})  2025-03-23 13:18:48.089410 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:48.090677 | orchestrator | 2025-03-23 13:18:48.092162 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-03-23 13:18:48.093293 | orchestrator | Sunday 23 March 2025 13:18:48 +0000 (0:00:00.393) 0:00:29.917 ********** 2025-03-23 13:18:48.268231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'}})  2025-03-23 13:18:48.269323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'}})  2025-03-23 13:18:48.270176 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:48.271203 | orchestrator | 2025-03-23 13:18:48.272347 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-03-23 13:18:48.273135 | orchestrator | Sunday 23 March 2025 13:18:48 +0000 (0:00:00.183) 0:00:30.101 ********** 2025-03-23 13:18:48.438449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'}})  2025-03-23 13:18:48.439994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'}})  2025-03-23 13:18:48.440556 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:48.442247 | orchestrator | 2025-03-23 13:18:48.445321 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-03-23 13:18:48.446643 | orchestrator | Sunday 23 March 2025 13:18:48 +0000 (0:00:00.170) 0:00:30.272 ********** 2025-03-23 13:18:48.578922 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:18:48.579384 | orchestrator | 2025-03-23 13:18:48.580251 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-03-23 13:18:48.580891 | orchestrator | Sunday 23 March 2025 13:18:48 +0000 
(0:00:00.140) 0:00:30.412 ********** 2025-03-23 13:18:48.725516 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:18:48.726234 | orchestrator | 2025-03-23 13:18:48.727179 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-03-23 13:18:48.729695 | orchestrator | Sunday 23 March 2025 13:18:48 +0000 (0:00:00.146) 0:00:30.558 ********** 2025-03-23 13:18:48.877094 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:48.878382 | orchestrator | 2025-03-23 13:18:48.878888 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-03-23 13:18:48.879989 | orchestrator | Sunday 23 March 2025 13:18:48 +0000 (0:00:00.151) 0:00:30.710 ********** 2025-03-23 13:18:49.026449 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:49.027103 | orchestrator | 2025-03-23 13:18:49.027788 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-03-23 13:18:49.028430 | orchestrator | Sunday 23 March 2025 13:18:49 +0000 (0:00:00.149) 0:00:30.859 ********** 2025-03-23 13:18:49.170792 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:49.171063 | orchestrator | 2025-03-23 13:18:49.171526 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-03-23 13:18:49.172311 | orchestrator | Sunday 23 March 2025 13:18:49 +0000 (0:00:00.143) 0:00:31.002 ********** 2025-03-23 13:18:49.316099 | orchestrator | ok: [testbed-node-4] => { 2025-03-23 13:18:49.317550 | orchestrator |  "ceph_osd_devices": { 2025-03-23 13:18:49.317673 | orchestrator |  "sdb": { 2025-03-23 13:18:49.319166 | orchestrator |  "osd_lvm_uuid": "5102d35b-39ce-5a2f-80bc-7bd1ce5c8233" 2025-03-23 13:18:49.319848 | orchestrator |  }, 2025-03-23 13:18:49.321848 | orchestrator |  "sdc": { 2025-03-23 13:18:49.323064 | orchestrator |  "osd_lvm_uuid": "cbe43cef-cccc-569d-93a4-8e7e2e8a94cb" 2025-03-23 13:18:49.323096 | orchestrator |  } 2025-03-23 13:18:49.323336 | orchestrator |  } 2025-03-23 13:18:49.323878 | orchestrator | } 2025-03-23 13:18:49.324629 | orchestrator | 2025-03-23 13:18:49.324941 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-03-23 13:18:49.325397 | orchestrator | Sunday 23 March 2025 13:18:49 +0000 (0:00:00.146) 0:00:31.149 ********** 2025-03-23 13:18:49.466004 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:49.467688 | orchestrator | 2025-03-23 13:18:49.469574 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-03-23 13:18:49.612735 | orchestrator | Sunday 23 March 2025 13:18:49 +0000 (0:00:00.148) 0:00:31.298 ********** 2025-03-23 13:18:49.612794 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:49.613003 | orchestrator | 2025-03-23 13:18:49.614826 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-03-23 13:18:49.617158 | orchestrator | Sunday 23 March 2025 13:18:49 +0000 (0:00:00.147) 0:00:31.445 ********** 2025-03-23 13:18:49.759533 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:18:49.759896 | orchestrator | 2025-03-23 13:18:49.763653 | orchestrator | TASK [Print configuration data] ************************************************ 2025-03-23 13:18:50.305696 | orchestrator | Sunday 23 March 2025 13:18:49 +0000 (0:00:00.146) 0:00:31.591 ********** 2025-03-23 13:18:50.305800 | orchestrator | changed: [testbed-node-4] => { 2025-03-23 13:18:50.306986 | 
orchestrator |  "_ceph_configure_lvm_config_data": { 2025-03-23 13:18:50.307966 | orchestrator |  "ceph_osd_devices": { 2025-03-23 13:18:50.313260 | orchestrator |  "sdb": { 2025-03-23 13:18:50.314385 | orchestrator |  "osd_lvm_uuid": "5102d35b-39ce-5a2f-80bc-7bd1ce5c8233" 2025-03-23 13:18:50.315474 | orchestrator |  }, 2025-03-23 13:18:50.317174 | orchestrator |  "sdc": { 2025-03-23 13:18:50.317815 | orchestrator |  "osd_lvm_uuid": "cbe43cef-cccc-569d-93a4-8e7e2e8a94cb" 2025-03-23 13:18:50.319660 | orchestrator |  } 2025-03-23 13:18:50.320449 | orchestrator |  }, 2025-03-23 13:18:50.321218 | orchestrator |  "lvm_volumes": [ 2025-03-23 13:18:50.323281 | orchestrator |  { 2025-03-23 13:18:50.323831 | orchestrator |  "data": "osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233", 2025-03-23 13:18:50.325700 | orchestrator |  "data_vg": "ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233" 2025-03-23 13:18:50.326110 | orchestrator |  }, 2025-03-23 13:18:50.326763 | orchestrator |  { 2025-03-23 13:18:50.327561 | orchestrator |  "data": "osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb", 2025-03-23 13:18:50.329388 | orchestrator |  "data_vg": "ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb" 2025-03-23 13:18:50.329870 | orchestrator |  } 2025-03-23 13:18:50.331478 | orchestrator |  ] 2025-03-23 13:18:50.332485 | orchestrator |  } 2025-03-23 13:18:50.333340 | orchestrator | } 2025-03-23 13:18:50.334392 | orchestrator | 2025-03-23 13:18:50.334643 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-03-23 13:18:50.335893 | orchestrator | Sunday 23 March 2025 13:18:50 +0000 (0:00:00.546) 0:00:32.138 ********** 2025-03-23 13:18:51.800310 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-03-23 13:18:51.800983 | orchestrator | 2025-03-23 13:18:51.802339 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-03-23 13:18:51.803473 | orchestrator | 2025-03-23 13:18:51.804484 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-23 13:18:51.804840 | orchestrator | Sunday 23 March 2025 13:18:51 +0000 (0:00:01.493) 0:00:33.631 ********** 2025-03-23 13:18:52.040793 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-03-23 13:18:52.041560 | orchestrator | 2025-03-23 13:18:52.042437 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-23 13:18:52.043468 | orchestrator | Sunday 23 March 2025 13:18:52 +0000 (0:00:00.241) 0:00:33.873 ********** 2025-03-23 13:18:52.673272 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:18:52.673838 | orchestrator | 2025-03-23 13:18:52.676730 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:53.098470 | orchestrator | Sunday 23 March 2025 13:18:52 +0000 (0:00:00.631) 0:00:34.505 ********** 2025-03-23 13:18:53.098558 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-03-23 13:18:53.098954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-03-23 13:18:53.099595 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-03-23 13:18:53.101272 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-03-23 13:18:53.102151 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop4) 2025-03-23 13:18:53.103086 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-03-23 13:18:53.103714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-03-23 13:18:53.104350 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-03-23 13:18:53.105286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-03-23 13:18:53.105839 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-03-23 13:18:53.106236 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-03-23 13:18:53.106653 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-03-23 13:18:53.107374 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-03-23 13:18:53.108147 | orchestrator | 2025-03-23 13:18:53.108526 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:53.108970 | orchestrator | Sunday 23 March 2025 13:18:53 +0000 (0:00:00.424) 0:00:34.929 ********** 2025-03-23 13:18:53.359062 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:18:53.361141 | orchestrator | 2025-03-23 13:18:53.363857 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:53.586563 | orchestrator | Sunday 23 March 2025 13:18:53 +0000 (0:00:00.262) 0:00:35.192 ********** 2025-03-23 13:18:53.586624 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:18:53.588419 | orchestrator | 2025-03-23 13:18:53.589627 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:53.589838 | orchestrator | Sunday 23 March 2025 13:18:53 +0000 (0:00:00.226) 0:00:35.418 ********** 2025-03-23 13:18:53.793699 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:18:53.795247 | orchestrator | 2025-03-23 13:18:53.796242 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:53.797328 | orchestrator | Sunday 23 March 2025 13:18:53 +0000 (0:00:00.207) 0:00:35.626 ********** 2025-03-23 13:18:54.014908 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:18:54.016110 | orchestrator | 2025-03-23 13:18:54.017356 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:54.018111 | orchestrator | Sunday 23 March 2025 13:18:54 +0000 (0:00:00.221) 0:00:35.847 ********** 2025-03-23 13:18:54.223693 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:18:54.224580 | orchestrator | 2025-03-23 13:18:54.225714 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:54.229543 | orchestrator | Sunday 23 March 2025 13:18:54 +0000 (0:00:00.207) 0:00:36.054 ********** 2025-03-23 13:18:54.469853 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:18:54.470136 | orchestrator | 2025-03-23 13:18:54.471779 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:54.472430 | orchestrator | Sunday 23 March 2025 13:18:54 +0000 (0:00:00.248) 0:00:36.302 ********** 2025-03-23 13:18:54.681616 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:18:54.682247 
| orchestrator | 2025-03-23 13:18:54.683256 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:54.683881 | orchestrator | Sunday 23 March 2025 13:18:54 +0000 (0:00:00.212) 0:00:36.515 ********** 2025-03-23 13:18:54.895755 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:18:54.896464 | orchestrator | 2025-03-23 13:18:54.896503 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:54.897653 | orchestrator | Sunday 23 March 2025 13:18:54 +0000 (0:00:00.214) 0:00:36.729 ********** 2025-03-23 13:18:55.794317 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81) 2025-03-23 13:18:55.795695 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81) 2025-03-23 13:18:55.797140 | orchestrator | 2025-03-23 13:18:55.797952 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:55.799135 | orchestrator | Sunday 23 March 2025 13:18:55 +0000 (0:00:00.896) 0:00:37.625 ********** 2025-03-23 13:18:56.247867 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fc6f372a-e295-454e-89b3-dab7283bde6d) 2025-03-23 13:18:56.248590 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fc6f372a-e295-454e-89b3-dab7283bde6d) 2025-03-23 13:18:56.249480 | orchestrator | 2025-03-23 13:18:56.250522 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:56.251255 | orchestrator | Sunday 23 March 2025 13:18:56 +0000 (0:00:00.454) 0:00:38.080 ********** 2025-03-23 13:18:56.792312 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cb506f9f-b661-4909-8304-0e1e52bc58d9) 2025-03-23 13:18:56.793012 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cb506f9f-b661-4909-8304-0e1e52bc58d9) 2025-03-23 13:18:56.795442 | orchestrator | 2025-03-23 13:18:56.796670 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:56.796701 | orchestrator | Sunday 23 March 2025 13:18:56 +0000 (0:00:00.542) 0:00:38.623 ********** 2025-03-23 13:18:57.328452 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9e74186d-4472-4929-8a20-9843938772e5) 2025-03-23 13:18:57.328610 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9e74186d-4472-4929-8a20-9843938772e5) 2025-03-23 13:18:57.329421 | orchestrator | 2025-03-23 13:18:57.330569 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:18:57.333041 | orchestrator | Sunday 23 March 2025 13:18:57 +0000 (0:00:00.537) 0:00:39.160 ********** 2025-03-23 13:18:57.683109 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-23 13:18:57.684016 | orchestrator | 2025-03-23 13:18:57.686436 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:58.133604 | orchestrator | Sunday 23 March 2025 13:18:57 +0000 (0:00:00.349) 0:00:39.510 ********** 2025-03-23 13:18:58.133735 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-03-23 13:18:58.135277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-03-23 13:18:58.135512 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-03-23 13:18:58.136861 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-03-23 13:18:58.140513 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-03-23 13:18:58.140756 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-03-23 13:18:58.140786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-03-23 13:18:58.140800 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-03-23 13:18:58.140820 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-03-23 13:18:58.141729 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-03-23 13:18:58.142350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-03-23 13:18:58.142941 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-03-23 13:18:58.143543 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-03-23 13:18:58.143760 | orchestrator | 2025-03-23 13:18:58.144224 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:58.144793 | orchestrator | Sunday 23 March 2025 13:18:58 +0000 (0:00:00.453) 0:00:39.964 ********** 2025-03-23 13:18:58.368938 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:18:58.369568 | orchestrator | 2025-03-23 13:18:58.371312 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:58.372341 | orchestrator | Sunday 23 March 2025 13:18:58 +0000 (0:00:00.237) 0:00:40.201 ********** 2025-03-23 13:18:58.589272 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:18:58.590002 | orchestrator | 2025-03-23 13:18:58.591907 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:58.592408 | orchestrator | Sunday 23 March 2025 13:18:58 +0000 (0:00:00.220) 0:00:40.421 ********** 2025-03-23 13:18:58.806009 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:18:58.806176 | orchestrator | 2025-03-23 13:18:58.806230 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:58.806602 | orchestrator | Sunday 23 March 2025 13:18:58 +0000 (0:00:00.216) 0:00:40.638 ********** 2025-03-23 13:18:59.470909 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:18:59.472159 | orchestrator | 2025-03-23 13:18:59.474255 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:59.474372 | orchestrator | Sunday 23 March 2025 13:18:59 +0000 (0:00:00.664) 0:00:41.302 ********** 2025-03-23 13:18:59.685584 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:18:59.688508 | orchestrator | 2025-03-23 13:18:59.689129 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:59.689966 | orchestrator | Sunday 23 March 2025 13:18:59 +0000 (0:00:00.213) 0:00:41.515 ********** 2025-03-23 13:18:59.940422 | orchestrator | skipping: [testbed-node-5] 2025-03-23 
13:18:59.941651 | orchestrator | 2025-03-23 13:18:59.941994 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:18:59.942731 | orchestrator | Sunday 23 March 2025 13:18:59 +0000 (0:00:00.257) 0:00:41.773 ********** 2025-03-23 13:19:00.182440 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:19:00.183186 | orchestrator | 2025-03-23 13:19:00.183881 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:19:00.183913 | orchestrator | Sunday 23 March 2025 13:19:00 +0000 (0:00:00.238) 0:00:42.011 ********** 2025-03-23 13:19:00.413350 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:19:00.414901 | orchestrator | 2025-03-23 13:19:00.415684 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:19:00.416563 | orchestrator | Sunday 23 March 2025 13:19:00 +0000 (0:00:00.233) 0:00:42.245 ********** 2025-03-23 13:19:01.178461 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-03-23 13:19:01.179591 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-03-23 13:19:01.181078 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-03-23 13:19:01.182091 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-03-23 13:19:01.184858 | orchestrator | 2025-03-23 13:19:01.393424 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:19:01.393522 | orchestrator | Sunday 23 March 2025 13:19:01 +0000 (0:00:00.766) 0:00:43.011 ********** 2025-03-23 13:19:01.393550 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:19:01.394315 | orchestrator | 2025-03-23 13:19:01.395712 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:19:01.396719 | orchestrator | Sunday 23 March 2025 13:19:01 +0000 (0:00:00.207) 0:00:43.219 ********** 2025-03-23 13:19:01.591575 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:19:01.592415 | orchestrator | 2025-03-23 13:19:01.593971 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:19:01.594890 | orchestrator | Sunday 23 March 2025 13:19:01 +0000 (0:00:00.204) 0:00:43.424 ********** 2025-03-23 13:19:01.821532 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:19:01.823484 | orchestrator | 2025-03-23 13:19:01.824183 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:19:01.825482 | orchestrator | Sunday 23 March 2025 13:19:01 +0000 (0:00:00.227) 0:00:43.652 ********** 2025-03-23 13:19:02.041750 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:19:02.043371 | orchestrator | 2025-03-23 13:19:02.043937 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-03-23 13:19:02.045943 | orchestrator | Sunday 23 March 2025 13:19:02 +0000 (0:00:00.222) 0:00:43.874 ********** 2025-03-23 13:19:02.467862 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-03-23 13:19:02.468848 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-03-23 13:19:02.469387 | orchestrator | 2025-03-23 13:19:02.470085 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-03-23 13:19:02.470470 | orchestrator | Sunday 23 March 2025 13:19:02 +0000 (0:00:00.425) 0:00:44.300 ********** 2025-03-23 13:19:02.616809 | 
orchestrator | skipping: [testbed-node-5] 2025-03-23 13:19:02.617318 | orchestrator | 2025-03-23 13:19:02.618613 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-03-23 13:19:02.620919 | orchestrator | Sunday 23 March 2025 13:19:02 +0000 (0:00:00.149) 0:00:44.450 ********** 2025-03-23 13:19:02.778160 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:19:02.778320 | orchestrator | 2025-03-23 13:19:02.779265 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-03-23 13:19:02.779741 | orchestrator | Sunday 23 March 2025 13:19:02 +0000 (0:00:00.161) 0:00:44.611 ********** 2025-03-23 13:19:02.924184 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:19:02.927259 | orchestrator | 2025-03-23 13:19:02.928182 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-03-23 13:19:02.929570 | orchestrator | Sunday 23 March 2025 13:19:02 +0000 (0:00:00.141) 0:00:44.752 ********** 2025-03-23 13:19:03.056363 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:19:03.057603 | orchestrator | 2025-03-23 13:19:03.060989 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-03-23 13:19:03.233041 | orchestrator | Sunday 23 March 2025 13:19:03 +0000 (0:00:00.137) 0:00:44.889 ********** 2025-03-23 13:19:03.233162 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9205bfbb-9f4f-501b-85a3-60f418fff160'}}) 2025-03-23 13:19:03.233803 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5a8506d3-5e74-5dde-8df3-17f522800900'}}) 2025-03-23 13:19:03.234884 | orchestrator | 2025-03-23 13:19:03.236050 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-03-23 13:19:03.237416 | orchestrator | Sunday 23 March 2025 13:19:03 +0000 (0:00:00.175) 0:00:45.065 ********** 2025-03-23 13:19:03.383707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9205bfbb-9f4f-501b-85a3-60f418fff160'}})  2025-03-23 13:19:03.385656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5a8506d3-5e74-5dde-8df3-17f522800900'}})  2025-03-23 13:19:03.386707 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:19:03.390703 | orchestrator | 2025-03-23 13:19:03.391378 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-03-23 13:19:03.392823 | orchestrator | Sunday 23 March 2025 13:19:03 +0000 (0:00:00.148) 0:00:45.213 ********** 2025-03-23 13:19:03.572391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9205bfbb-9f4f-501b-85a3-60f418fff160'}})  2025-03-23 13:19:03.572781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5a8506d3-5e74-5dde-8df3-17f522800900'}})  2025-03-23 13:19:03.573674 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:19:03.574691 | orchestrator | 2025-03-23 13:19:03.575372 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-03-23 13:19:03.576316 | orchestrator | Sunday 23 March 2025 13:19:03 +0000 (0:00:00.190) 0:00:45.404 ********** 2025-03-23 13:19:03.765853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9205bfbb-9f4f-501b-85a3-60f418fff160'}})  2025-03-23 13:19:03.766123 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5a8506d3-5e74-5dde-8df3-17f522800900'}})  2025-03-23 13:19:03.767011 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:19:03.767867 | orchestrator | 2025-03-23 13:19:03.769832 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-03-23 13:19:03.935433 | orchestrator | Sunday 23 March 2025 13:19:03 +0000 (0:00:00.194) 0:00:45.598 ********** 2025-03-23 13:19:03.935521 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:19:03.936318 | orchestrator | 2025-03-23 13:19:03.937006 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-03-23 13:19:03.938262 | orchestrator | Sunday 23 March 2025 13:19:03 +0000 (0:00:00.167) 0:00:45.766 ********** 2025-03-23 13:19:04.121263 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:19:04.122070 | orchestrator | 2025-03-23 13:19:04.122430 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-03-23 13:19:04.123491 | orchestrator | Sunday 23 March 2025 13:19:04 +0000 (0:00:00.187) 0:00:45.954 ********** 2025-03-23 13:19:04.286518 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:19:04.288445 | orchestrator | 2025-03-23 13:19:04.288893 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-03-23 13:19:04.288923 | orchestrator | Sunday 23 March 2025 13:19:04 +0000 (0:00:00.165) 0:00:46.119 ********** 2025-03-23 13:19:04.684187 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:19:04.684367 | orchestrator | 2025-03-23 13:19:04.684392 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-03-23 13:19:04.684842 | orchestrator | Sunday 23 March 2025 13:19:04 +0000 (0:00:00.397) 0:00:46.517 ********** 2025-03-23 13:19:04.835177 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:19:04.835956 | orchestrator | 2025-03-23 13:19:04.836263 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-03-23 13:19:04.837068 | orchestrator | Sunday 23 March 2025 13:19:04 +0000 (0:00:00.150) 0:00:46.668 ********** 2025-03-23 13:19:05.033734 | orchestrator | ok: [testbed-node-5] => { 2025-03-23 13:19:05.034854 | orchestrator |  "ceph_osd_devices": { 2025-03-23 13:19:05.034927 | orchestrator |  "sdb": { 2025-03-23 13:19:05.035649 | orchestrator |  "osd_lvm_uuid": "9205bfbb-9f4f-501b-85a3-60f418fff160" 2025-03-23 13:19:05.036405 | orchestrator |  }, 2025-03-23 13:19:05.037178 | orchestrator |  "sdc": { 2025-03-23 13:19:05.037405 | orchestrator |  "osd_lvm_uuid": "5a8506d3-5e74-5dde-8df3-17f522800900" 2025-03-23 13:19:05.038070 | orchestrator |  } 2025-03-23 13:19:05.039586 | orchestrator |  } 2025-03-23 13:19:05.041031 | orchestrator | } 2025-03-23 13:19:05.041061 | orchestrator | 2025-03-23 13:19:05.041148 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-03-23 13:19:05.044782 | orchestrator | Sunday 23 March 2025 13:19:05 +0000 (0:00:00.198) 0:00:46.866 ********** 2025-03-23 13:19:05.203576 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:19:05.349032 | orchestrator | 2025-03-23 13:19:05.349073 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-03-23 13:19:05.349088 | orchestrator | Sunday 23 March 2025 13:19:05 +0000 (0:00:00.161) 0:00:47.027 ********** 2025-03-23 
13:19:05.349109 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:19:05.350336 | orchestrator | 2025-03-23 13:19:05.350368 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-03-23 13:19:05.353154 | orchestrator | Sunday 23 March 2025 13:19:05 +0000 (0:00:00.153) 0:00:47.181 ********** 2025-03-23 13:19:05.501860 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:19:05.502283 | orchestrator | 2025-03-23 13:19:05.502330 | orchestrator | TASK [Print configuration data] ************************************************ 2025-03-23 13:19:05.502404 | orchestrator | Sunday 23 March 2025 13:19:05 +0000 (0:00:00.150) 0:00:47.332 ********** 2025-03-23 13:19:05.794103 | orchestrator | changed: [testbed-node-5] => { 2025-03-23 13:19:05.794255 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-03-23 13:19:05.794276 | orchestrator |  "ceph_osd_devices": { 2025-03-23 13:19:05.794295 | orchestrator |  "sdb": { 2025-03-23 13:19:05.794531 | orchestrator |  "osd_lvm_uuid": "9205bfbb-9f4f-501b-85a3-60f418fff160" 2025-03-23 13:19:05.794556 | orchestrator |  }, 2025-03-23 13:19:05.794576 | orchestrator |  "sdc": { 2025-03-23 13:19:05.795173 | orchestrator |  "osd_lvm_uuid": "5a8506d3-5e74-5dde-8df3-17f522800900" 2025-03-23 13:19:05.797231 | orchestrator |  } 2025-03-23 13:19:05.797498 | orchestrator |  }, 2025-03-23 13:19:05.797525 | orchestrator |  "lvm_volumes": [ 2025-03-23 13:19:05.797541 | orchestrator |  { 2025-03-23 13:19:05.797556 | orchestrator |  "data": "osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160", 2025-03-23 13:19:05.797572 | orchestrator |  "data_vg": "ceph-9205bfbb-9f4f-501b-85a3-60f418fff160" 2025-03-23 13:19:05.797587 | orchestrator |  }, 2025-03-23 13:19:05.797607 | orchestrator |  { 2025-03-23 13:19:05.798064 | orchestrator |  "data": "osd-block-5a8506d3-5e74-5dde-8df3-17f522800900", 2025-03-23 13:19:05.798487 | orchestrator |  "data_vg": "ceph-5a8506d3-5e74-5dde-8df3-17f522800900" 2025-03-23 13:19:05.798723 | orchestrator |  } 2025-03-23 13:19:05.799060 | orchestrator |  ] 2025-03-23 13:19:05.799368 | orchestrator |  } 2025-03-23 13:19:05.799729 | orchestrator | } 2025-03-23 13:19:05.800078 | orchestrator | 2025-03-23 13:19:05.800465 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-03-23 13:19:05.800788 | orchestrator | Sunday 23 March 2025 13:19:05 +0000 (0:00:00.291) 0:00:47.624 ********** 2025-03-23 13:19:07.216742 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-03-23 13:19:07.216922 | orchestrator | 2025-03-23 13:19:07.217329 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:19:07.217603 | orchestrator | 2025-03-23 13:19:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-23 13:19:07.217998 | orchestrator | 2025-03-23 13:19:07 | INFO  | Please wait and do not abort execution. 
2025-03-23 13:19:07.219329 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-03-23 13:19:07.220099 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-03-23 13:19:07.221245 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-03-23 13:19:07.223122 | orchestrator | 2025-03-23 13:19:07.223606 | orchestrator | 2025-03-23 13:19:07.224462 | orchestrator | 2025-03-23 13:19:07.224488 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:19:07.225046 | orchestrator | Sunday 23 March 2025 13:19:07 +0000 (0:00:01.423) 0:00:49.047 ********** 2025-03-23 13:19:07.225471 | orchestrator | =============================================================================== 2025-03-23 13:19:07.225890 | orchestrator | Write configuration file ------------------------------------------------ 5.58s 2025-03-23 13:19:07.226413 | orchestrator | Add known partitions to the list of available block devices ------------- 1.50s 2025-03-23 13:19:07.226762 | orchestrator | Add known links to the list of available block devices ------------------ 1.47s 2025-03-23 13:19:07.227165 | orchestrator | Get initial list of available block devices ----------------------------- 1.21s 2025-03-23 13:19:07.228132 | orchestrator | Print configuration data ------------------------------------------------ 1.13s 2025-03-23 13:19:07.228397 | orchestrator | Add known partitions to the list of available block devices ------------- 1.11s 2025-03-23 13:19:07.228912 | orchestrator | Add known links to the list of available block devices ------------------ 0.91s 2025-03-23 13:19:07.228939 | orchestrator | Add known links to the list of available block devices ------------------ 0.90s 2025-03-23 13:19:07.229561 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.88s 2025-03-23 13:19:07.229812 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s 2025-03-23 13:19:07.230235 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.80s 2025-03-23 13:19:07.230544 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s 2025-03-23 13:19:07.231368 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.76s 2025-03-23 13:19:07.231394 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s 2025-03-23 13:19:07.231556 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2025-03-23 13:19:07.231861 | orchestrator | Generate DB VG names ---------------------------------------------------- 0.73s 2025-03-23 13:19:07.232221 | orchestrator | Set WAL devices config data --------------------------------------------- 0.72s 2025-03-23 13:19:07.233049 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.68s 2025-03-23 13:19:19.480718 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2025-03-23 13:19:19.480789 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.67s 2025-03-23 13:19:19.480816 | orchestrator | 2025-03-23 13:19:19 | INFO  | Task 8002579a-c139-4eb6-a468-f3ad67fa1b8d is running in background. Output coming soon. 
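For reference: the configure-lvm play above computes, for each storage node, a ceph_osd_devices map (one stable osd_lvm_uuid per data disk) and the matching block-only lvm_volumes list, and its "Write configuration file" handler persists them on testbed-manager. A minimal sketch of what such a generated host_vars file could look like for testbed-node-4, using only the values printed in the log (the file path and exact layout are assumptions for illustration, not taken from the playbook):

# Sketch: generated host_vars for testbed-node-4
# (hypothetical location, e.g. .../inventory/host_vars/testbed-node-4/ceph-lvm-configuration.yml)
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: 5102d35b-39ce-5a2f-80bc-7bd1ce5c8233
  sdc:
    osd_lvm_uuid: cbe43cef-cccc-569d-93a4-8e7e2e8a94cb
lvm_volumes:
  - data: osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233
    data_vg: ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233
  - data: osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb
    data_vg: ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb

The data/data_vg pairing is the form ceph-ansible expects in lvm_volumes when logical volumes are pre-created; only the block-only variant is generated here, while the block+db, block+wal and block+db+wal variants are skipped, presumably because no separate DB/WAL devices are configured for these nodes.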
2025-03-23 13:19:46.900117 | orchestrator | 2025-03-23 13:19:37 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-03-23 13:19:48.634733 | orchestrator | 2025-03-23 13:19:37 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-03-23 13:19:48.634832 | orchestrator | 2025-03-23 13:19:37 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-03-23 13:19:48.634851 | orchestrator | 2025-03-23 13:19:38 | INFO  | Handling group overwrites in 99-overwrite 2025-03-23 13:19:48.634879 | orchestrator | 2025-03-23 13:19:38 | INFO  | Removing group ceph-mds from 50-ceph 2025-03-23 13:19:48.634906 | orchestrator | 2025-03-23 13:19:38 | INFO  | Removing group ceph-rgw from 50-ceph 2025-03-23 13:19:48.634921 | orchestrator | 2025-03-23 13:19:38 | INFO  | Removing group netbird:children from 50-infrastruture 2025-03-23 13:19:48.634935 | orchestrator | 2025-03-23 13:19:38 | INFO  | Removing group storage:children from 50-kolla 2025-03-23 13:19:48.634950 | orchestrator | 2025-03-23 13:19:38 | INFO  | Removing group frr:children from 60-generic 2025-03-23 13:19:48.634964 | orchestrator | 2025-03-23 13:19:38 | INFO  | Handling group overwrites in 20-roles 2025-03-23 13:19:48.634978 | orchestrator | 2025-03-23 13:19:38 | INFO  | Removing group k3s_node from 50-infrastruture 2025-03-23 13:19:48.634992 | orchestrator | 2025-03-23 13:19:38 | INFO  | File 20-netbox not found in /inventory.pre/ 2025-03-23 13:19:48.635006 | orchestrator | 2025-03-23 13:19:46 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups 2025-03-23 13:19:48.635021 | orchestrator | [master 3c1fe08] 2025-03-23-13-19 2025-03-23 13:19:48.635035 | orchestrator | 1 file changed, 42 deletions(-) 2025-03-23 13:19:48.635065 | orchestrator | 2025-03-23 13:19:48 | INFO  | Task e75767fe-9f35-4b32-b350-0d38f780d3e7 (ceph-create-lvm-devices) was prepared for execution. 2025-03-23 13:19:51.804487 | orchestrator | 2025-03-23 13:19:48 | INFO  | It takes a moment until task e75767fe-9f35-4b32-b350-0d38f780d3e7 (ceph-create-lvm-devices) has been started and output is visible here. 
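The ceph-create-lvm-devices play that starts below turns those lvm_volumes entries into actual LVM objects: it builds a dict of block VGs to physical devices from ceph_osd_devices ("Create dict of block VGs -> PVs"), then runs "Create block VGs" and "Create block LVs". A minimal sketch of that step using the standard Ansible LVM modules (the module choice and the block_vg_pvs variable name are assumptions for illustration; the actual OSISM tasks may differ):

# Sketch only: one VG per OSD data disk, then a single LV that fills it.
- name: Create block VGs
  community.general.lvg:
    vg: "{{ item.data_vg }}"                 # e.g. ceph-8229b7a0-df8d-5815-8245-22e3d24081aa
    pvs: "{{ block_vg_pvs[item.data_vg] }}"  # e.g. /dev/sdb, from the assumed VG->PV dict
  loop: "{{ lvm_volumes }}"

- name: Create block LVs
  community.general.lvol:
    vg: "{{ item.data_vg }}"
    lv: "{{ item.data }}"                    # e.g. osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa
    size: "100%VG"
  loop: "{{ lvm_volumes }}"

The intent is that ceph-volume can later consume these pre-created VG/LV pairs referenced by lvm_volumes instead of preparing raw devices itself.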
2025-03-23 13:19:51.805339 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-03-23 13:19:52.368107 | orchestrator | 2025-03-23 13:19:52.369156 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-03-23 13:19:52.370458 | orchestrator | 2025-03-23 13:19:52.370786 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-23 13:19:52.375445 | orchestrator | Sunday 23 March 2025 13:19:52 +0000 (0:00:00.486) 0:00:00.486 ********** 2025-03-23 13:19:52.624334 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-03-23 13:19:52.629430 | orchestrator | 2025-03-23 13:19:52.630128 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-23 13:19:52.631009 | orchestrator | Sunday 23 March 2025 13:19:52 +0000 (0:00:00.255) 0:00:00.742 ********** 2025-03-23 13:19:52.902176 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:19:52.903038 | orchestrator | 2025-03-23 13:19:52.904454 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:19:52.906262 | orchestrator | Sunday 23 March 2025 13:19:52 +0000 (0:00:00.279) 0:00:01.021 ********** 2025-03-23 13:19:53.665476 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-03-23 13:19:53.666111 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-03-23 13:19:53.670508 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-03-23 13:19:53.671433 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-03-23 13:19:53.672610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-03-23 13:19:53.673865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-03-23 13:19:53.674434 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-03-23 13:19:53.675640 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-03-23 13:19:53.678661 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-03-23 13:19:53.680443 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-03-23 13:19:53.682269 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-03-23 13:19:53.683870 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-03-23 13:19:53.684944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-03-23 13:19:53.687230 | orchestrator | 2025-03-23 13:19:53.688713 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:19:53.689001 | orchestrator | Sunday 23 March 2025 13:19:53 +0000 (0:00:00.761) 0:00:01.782 ********** 2025-03-23 13:19:53.863526 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:19:53.864037 | orchestrator | 2025-03-23 13:19:53.864095 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:19:53.864505 | orchestrator | Sunday 23 March 2025 13:19:53 +0000 
(0:00:00.199) 0:00:01.982 ********** 2025-03-23 13:19:54.075188 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:19:54.076804 | orchestrator | 2025-03-23 13:19:54.077793 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:19:54.078860 | orchestrator | Sunday 23 March 2025 13:19:54 +0000 (0:00:00.211) 0:00:02.193 ********** 2025-03-23 13:19:54.284098 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:19:54.285004 | orchestrator | 2025-03-23 13:19:54.286279 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:19:54.286589 | orchestrator | Sunday 23 March 2025 13:19:54 +0000 (0:00:00.210) 0:00:02.404 ********** 2025-03-23 13:19:54.532021 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:19:54.532189 | orchestrator | 2025-03-23 13:19:54.532329 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:19:54.532429 | orchestrator | Sunday 23 March 2025 13:19:54 +0000 (0:00:00.248) 0:00:02.652 ********** 2025-03-23 13:19:54.740078 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:19:54.741086 | orchestrator | 2025-03-23 13:19:54.741566 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:19:54.741603 | orchestrator | Sunday 23 March 2025 13:19:54 +0000 (0:00:00.204) 0:00:02.856 ********** 2025-03-23 13:19:54.944536 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:19:54.944842 | orchestrator | 2025-03-23 13:19:54.944883 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:19:54.945373 | orchestrator | Sunday 23 March 2025 13:19:54 +0000 (0:00:00.205) 0:00:03.062 ********** 2025-03-23 13:19:55.176481 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:19:55.177981 | orchestrator | 2025-03-23 13:19:55.178999 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:19:55.179338 | orchestrator | Sunday 23 March 2025 13:19:55 +0000 (0:00:00.233) 0:00:03.295 ********** 2025-03-23 13:19:55.409287 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:19:55.410280 | orchestrator | 2025-03-23 13:19:55.411357 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:19:55.415032 | orchestrator | Sunday 23 March 2025 13:19:55 +0000 (0:00:00.231) 0:00:03.527 ********** 2025-03-23 13:19:56.065353 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48) 2025-03-23 13:19:56.066344 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48) 2025-03-23 13:19:56.067487 | orchestrator | 2025-03-23 13:19:56.068945 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:19:56.069951 | orchestrator | Sunday 23 March 2025 13:19:56 +0000 (0:00:00.655) 0:00:04.182 ********** 2025-03-23 13:19:56.821538 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_006f5652-5a72-4e84-ab95-2470543ee754) 2025-03-23 13:19:56.822499 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_006f5652-5a72-4e84-ab95-2470543ee754) 2025-03-23 13:19:56.824126 | orchestrator | 2025-03-23 13:19:56.824699 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 
13:19:56.825840 | orchestrator | Sunday 23 March 2025 13:19:56 +0000 (0:00:00.755) 0:00:04.938 ********** 2025-03-23 13:19:57.265958 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3a31d3ab-ce8e-4019-949d-50084d47806b) 2025-03-23 13:19:57.267715 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3a31d3ab-ce8e-4019-949d-50084d47806b) 2025-03-23 13:19:57.268471 | orchestrator | 2025-03-23 13:19:57.269996 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:19:57.270976 | orchestrator | Sunday 23 March 2025 13:19:57 +0000 (0:00:00.446) 0:00:05.384 ********** 2025-03-23 13:19:57.740587 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4783091f-b49d-4bb1-a12d-8bcd2b1f8992) 2025-03-23 13:19:57.742122 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4783091f-b49d-4bb1-a12d-8bcd2b1f8992) 2025-03-23 13:19:57.743339 | orchestrator | 2025-03-23 13:19:57.744755 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:19:57.746234 | orchestrator | Sunday 23 March 2025 13:19:57 +0000 (0:00:00.472) 0:00:05.857 ********** 2025-03-23 13:19:58.095982 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-23 13:19:58.096851 | orchestrator | 2025-03-23 13:19:58.097933 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:19:58.098543 | orchestrator | Sunday 23 March 2025 13:19:58 +0000 (0:00:00.357) 0:00:06.215 ********** 2025-03-23 13:19:58.594881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-03-23 13:19:58.595387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-03-23 13:19:58.595655 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-03-23 13:19:58.596350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-03-23 13:19:58.596644 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-03-23 13:19:58.597456 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-03-23 13:19:58.597844 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-03-23 13:19:58.600274 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-03-23 13:19:58.601338 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-03-23 13:19:58.601364 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-03-23 13:19:58.601378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-03-23 13:19:58.601393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-03-23 13:19:58.601412 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-03-23 13:19:58.601509 | orchestrator | 2025-03-23 13:19:58.602874 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:19:58.603288 | orchestrator | Sunday 23 March 2025 13:19:58 +0000 
(0:00:00.499) 0:00:06.714 ********** 2025-03-23 13:19:58.809348 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:19:58.809706 | orchestrator | 2025-03-23 13:19:58.810086 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:19:58.810382 | orchestrator | Sunday 23 March 2025 13:19:58 +0000 (0:00:00.208) 0:00:06.923 ********** 2025-03-23 13:19:59.011359 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:19:59.011550 | orchestrator | 2025-03-23 13:19:59.011990 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:19:59.012617 | orchestrator | Sunday 23 March 2025 13:19:59 +0000 (0:00:00.206) 0:00:07.130 ********** 2025-03-23 13:19:59.226503 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:19:59.227127 | orchestrator | 2025-03-23 13:19:59.227494 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:19:59.228546 | orchestrator | Sunday 23 March 2025 13:19:59 +0000 (0:00:00.212) 0:00:07.343 ********** 2025-03-23 13:19:59.450459 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:19:59.451379 | orchestrator | 2025-03-23 13:19:59.452267 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:19:59.452729 | orchestrator | Sunday 23 March 2025 13:19:59 +0000 (0:00:00.227) 0:00:07.570 ********** 2025-03-23 13:20:00.066764 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:00.067162 | orchestrator | 2025-03-23 13:20:00.067192 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:00.067211 | orchestrator | Sunday 23 March 2025 13:20:00 +0000 (0:00:00.614) 0:00:08.185 ********** 2025-03-23 13:20:00.327818 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:00.328382 | orchestrator | 2025-03-23 13:20:00.328664 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:00.328694 | orchestrator | Sunday 23 March 2025 13:20:00 +0000 (0:00:00.262) 0:00:08.447 ********** 2025-03-23 13:20:00.535544 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:00.538137 | orchestrator | 2025-03-23 13:20:00.766402 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:00.766517 | orchestrator | Sunday 23 March 2025 13:20:00 +0000 (0:00:00.204) 0:00:08.651 ********** 2025-03-23 13:20:00.766575 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:00.766640 | orchestrator | 2025-03-23 13:20:00.766657 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:00.766675 | orchestrator | Sunday 23 March 2025 13:20:00 +0000 (0:00:00.231) 0:00:08.882 ********** 2025-03-23 13:20:01.462954 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-03-23 13:20:01.464120 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-03-23 13:20:01.465295 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-03-23 13:20:01.466102 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-03-23 13:20:01.466650 | orchestrator | 2025-03-23 13:20:01.467666 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:01.468036 | orchestrator | Sunday 23 March 2025 13:20:01 +0000 (0:00:00.700) 0:00:09.582 ********** 2025-03-23 13:20:01.670797 | orchestrator | 
skipping: [testbed-node-3] 2025-03-23 13:20:01.671604 | orchestrator | 2025-03-23 13:20:01.672722 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:01.675038 | orchestrator | Sunday 23 March 2025 13:20:01 +0000 (0:00:00.207) 0:00:09.790 ********** 2025-03-23 13:20:01.882981 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:01.884062 | orchestrator | 2025-03-23 13:20:01.884727 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:01.885570 | orchestrator | Sunday 23 March 2025 13:20:01 +0000 (0:00:00.210) 0:00:10.000 ********** 2025-03-23 13:20:02.083852 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:02.084144 | orchestrator | 2025-03-23 13:20:02.084477 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:02.085400 | orchestrator | Sunday 23 March 2025 13:20:02 +0000 (0:00:00.201) 0:00:10.202 ********** 2025-03-23 13:20:02.300588 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:02.301474 | orchestrator | 2025-03-23 13:20:02.302989 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-03-23 13:20:02.303829 | orchestrator | Sunday 23 March 2025 13:20:02 +0000 (0:00:00.216) 0:00:10.419 ********** 2025-03-23 13:20:02.457458 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:02.458203 | orchestrator | 2025-03-23 13:20:02.458431 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-03-23 13:20:02.459474 | orchestrator | Sunday 23 March 2025 13:20:02 +0000 (0:00:00.155) 0:00:10.575 ********** 2025-03-23 13:20:02.678647 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8229b7a0-df8d-5815-8245-22e3d24081aa'}}) 2025-03-23 13:20:02.680339 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0ab6ed36-da2c-5faf-8aed-224e80357d25'}}) 2025-03-23 13:20:02.680907 | orchestrator | 2025-03-23 13:20:02.680937 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-03-23 13:20:02.682097 | orchestrator | Sunday 23 March 2025 13:20:02 +0000 (0:00:00.220) 0:00:10.795 ********** 2025-03-23 13:20:04.860054 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'}) 2025-03-23 13:20:04.860294 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'}) 2025-03-23 13:20:04.860365 | orchestrator | 2025-03-23 13:20:04.860834 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-03-23 13:20:04.861008 | orchestrator | Sunday 23 March 2025 13:20:04 +0000 (0:00:02.184) 0:00:12.980 ********** 2025-03-23 13:20:04.999668 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'})  2025-03-23 13:20:04.999799 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'})  2025-03-23 13:20:05.000050 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:05.000433 | orchestrator | 2025-03-23 13:20:05.000755 | 
orchestrator | TASK [Create block LVs] ******************************************************** 2025-03-23 13:20:05.001412 | orchestrator | Sunday 23 March 2025 13:20:04 +0000 (0:00:00.139) 0:00:13.120 ********** 2025-03-23 13:20:06.431619 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'}) 2025-03-23 13:20:06.432216 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'}) 2025-03-23 13:20:06.433176 | orchestrator | 2025-03-23 13:20:06.434398 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-03-23 13:20:06.434806 | orchestrator | Sunday 23 March 2025 13:20:06 +0000 (0:00:01.430) 0:00:14.550 ********** 2025-03-23 13:20:06.584512 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'})  2025-03-23 13:20:06.584581 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'})  2025-03-23 13:20:06.584590 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:06.586280 | orchestrator | 2025-03-23 13:20:06.587356 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-03-23 13:20:06.721452 | orchestrator | Sunday 23 March 2025 13:20:06 +0000 (0:00:00.154) 0:00:14.704 ********** 2025-03-23 13:20:06.721557 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:06.722192 | orchestrator | 2025-03-23 13:20:06.724136 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-03-23 13:20:06.877084 | orchestrator | Sunday 23 March 2025 13:20:06 +0000 (0:00:00.136) 0:00:14.841 ********** 2025-03-23 13:20:06.877197 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'})  2025-03-23 13:20:06.877882 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'})  2025-03-23 13:20:06.877914 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:06.878654 | orchestrator | 2025-03-23 13:20:06.880284 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-03-23 13:20:06.880582 | orchestrator | Sunday 23 March 2025 13:20:06 +0000 (0:00:00.155) 0:00:14.996 ********** 2025-03-23 13:20:07.013680 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:07.014365 | orchestrator | 2025-03-23 13:20:07.016499 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-03-23 13:20:07.016740 | orchestrator | Sunday 23 March 2025 13:20:07 +0000 (0:00:00.136) 0:00:15.133 ********** 2025-03-23 13:20:07.170464 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'})  2025-03-23 13:20:07.174169 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'})  2025-03-23 13:20:07.308524 | orchestrator | skipping: 
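
The two "changed" results above are the core of the LVM preparation for testbed-node-3: one volume group and one logical volume per OSD disk, named after the osd_lvm_uuid from ceph_osd_devices. The task definitions themselves are not part of this log; as a rough illustration only, equivalent Ansible tasks could look like the sketch below, where block_vgs_to_pvs stands in (hypothetically) for the VG-to-PV dict built by "Create dict of block VGs -> PVs from ceph_osd_devices".

- name: Create block VGs (illustrative sketch, not the osism task)
  community.general.lvg:
    vg: "{{ item.key }}"      # e.g. ceph-8229b7a0-df8d-5815-8245-22e3d24081aa
    pvs: "{{ item.value }}"   # e.g. /dev/sdb
  loop: "{{ block_vgs_to_pvs | dict2items }}"   # block_vgs_to_pvs is a hypothetical variable name

- name: Create block LVs (illustrative sketch)
  community.general.lvol:
    vg: "{{ item.data_vg }}"
    lv: "{{ item.data }}"     # e.g. osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa
    size: 100%FREE            # give the whole VG to the OSD block LV
  loop: "{{ lvm_volumes }}"
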
[testbed-node-3] 2025-03-23 13:20:07.308613 | orchestrator | 2025-03-23 13:20:07.308643 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-03-23 13:20:07.308659 | orchestrator | Sunday 23 March 2025 13:20:07 +0000 (0:00:00.157) 0:00:15.290 ********** 2025-03-23 13:20:07.308685 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:07.308755 | orchestrator | 2025-03-23 13:20:07.308778 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-03-23 13:20:07.309142 | orchestrator | Sunday 23 March 2025 13:20:07 +0000 (0:00:00.135) 0:00:15.425 ********** 2025-03-23 13:20:07.571186 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'})  2025-03-23 13:20:07.571676 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'})  2025-03-23 13:20:07.574506 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:07.574569 | orchestrator | 2025-03-23 13:20:07.575494 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-03-23 13:20:07.576040 | orchestrator | Sunday 23 March 2025 13:20:07 +0000 (0:00:00.265) 0:00:15.691 ********** 2025-03-23 13:20:07.714178 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:20:07.715121 | orchestrator | 2025-03-23 13:20:07.715641 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-03-23 13:20:07.716636 | orchestrator | Sunday 23 March 2025 13:20:07 +0000 (0:00:00.142) 0:00:15.833 ********** 2025-03-23 13:20:07.882102 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'})  2025-03-23 13:20:07.882281 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'})  2025-03-23 13:20:07.882477 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:07.882509 | orchestrator | 2025-03-23 13:20:07.882792 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-03-23 13:20:07.882823 | orchestrator | Sunday 23 March 2025 13:20:07 +0000 (0:00:00.169) 0:00:16.003 ********** 2025-03-23 13:20:08.042585 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'})  2025-03-23 13:20:08.045794 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'})  2025-03-23 13:20:08.048321 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:08.049129 | orchestrator | 2025-03-23 13:20:08.049163 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-03-23 13:20:08.049934 | orchestrator | Sunday 23 March 2025 13:20:08 +0000 (0:00:00.157) 0:00:16.160 ********** 2025-03-23 13:20:08.203832 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'})  2025-03-23 13:20:08.205107 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'})  2025-03-23 13:20:08.206123 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:08.207040 | orchestrator | 2025-03-23 13:20:08.207819 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-03-23 13:20:08.208282 | orchestrator | Sunday 23 March 2025 13:20:08 +0000 (0:00:00.163) 0:00:16.323 ********** 2025-03-23 13:20:08.342507 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:08.343434 | orchestrator | 2025-03-23 13:20:08.344312 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-03-23 13:20:08.345480 | orchestrator | Sunday 23 March 2025 13:20:08 +0000 (0:00:00.138) 0:00:16.461 ********** 2025-03-23 13:20:08.475842 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:08.476468 | orchestrator | 2025-03-23 13:20:08.476532 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-03-23 13:20:08.477325 | orchestrator | Sunday 23 March 2025 13:20:08 +0000 (0:00:00.132) 0:00:16.594 ********** 2025-03-23 13:20:08.594129 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:08.594560 | orchestrator | 2025-03-23 13:20:08.596183 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-03-23 13:20:08.597295 | orchestrator | Sunday 23 March 2025 13:20:08 +0000 (0:00:00.118) 0:00:16.712 ********** 2025-03-23 13:20:08.725789 | orchestrator | ok: [testbed-node-3] => { 2025-03-23 13:20:08.726096 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-03-23 13:20:08.726672 | orchestrator | } 2025-03-23 13:20:08.727798 | orchestrator | 2025-03-23 13:20:08.728837 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-03-23 13:20:08.729499 | orchestrator | Sunday 23 March 2025 13:20:08 +0000 (0:00:00.133) 0:00:16.846 ********** 2025-03-23 13:20:08.861293 | orchestrator | ok: [testbed-node-3] => { 2025-03-23 13:20:08.862141 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-03-23 13:20:08.863261 | orchestrator | } 2025-03-23 13:20:08.863968 | orchestrator | 2025-03-23 13:20:08.864719 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-03-23 13:20:08.865558 | orchestrator | Sunday 23 March 2025 13:20:08 +0000 (0:00:00.134) 0:00:16.981 ********** 2025-03-23 13:20:09.004483 | orchestrator | ok: [testbed-node-3] => { 2025-03-23 13:20:09.005524 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-03-23 13:20:09.005831 | orchestrator | } 2025-03-23 13:20:09.007082 | orchestrator | 2025-03-23 13:20:09.008190 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-03-23 13:20:09.008606 | orchestrator | Sunday 23 March 2025 13:20:09 +0000 (0:00:00.142) 0:00:17.123 ********** 2025-03-23 13:20:10.050196 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:20:10.050444 | orchestrator | 2025-03-23 13:20:10.052356 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-03-23 13:20:10.053155 | orchestrator | Sunday 23 March 2025 13:20:10 +0000 (0:00:01.044) 0:00:18.168 ********** 2025-03-23 13:20:10.594964 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:20:10.595928 | orchestrator | 2025-03-23 13:20:10.595950 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] 
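
All of the "Count OSDs ..." and "Fail if number of OSDs exceeds num_osds ..." tasks above are skipped on this testbed, and the printed _num_osds_wanted_per_* dicts are empty, because no separate DB or WAL devices appear to be configured. Purely as an illustration of such a guard, a sketch assuming that lvm_volumes entries may carry an optional db_vg key and that num_osds is the per-VG limit (both names are assumptions, not taken from the playbook):

- name: Count OSDs put on ceph_db_devices defined in lvm_volumes (sketch)
  ansible.builtin.set_fact:
    _num_osds_wanted_per_db_vg: >-
      {{ _num_osds_wanted_per_db_vg | default({})
         | combine({item.db_vg: (_num_osds_wanted_per_db_vg | default({})).get(item.db_vg, 0) | int + 1}) }}
  loop: "{{ lvm_volumes }}"
  when: item.db_vg is defined

- name: Fail if number of OSDs exceeds num_osds for a DB VG (sketch)
  ansible.builtin.assert:
    that: item.value | int <= num_osds | int   # num_osds: assumed per-VG limit
    fail_msg: "{{ item.key }} would hold {{ item.value }} OSDs, more than num_osds allows"
  loop: "{{ _num_osds_wanted_per_db_vg | dict2items }}"
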
**************** 2025-03-23 13:20:10.598459 | orchestrator | Sunday 23 March 2025 13:20:10 +0000 (0:00:00.544) 0:00:18.712 ********** 2025-03-23 13:20:11.158601 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:20:11.158786 | orchestrator | 2025-03-23 13:20:11.158813 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-03-23 13:20:11.162469 | orchestrator | Sunday 23 March 2025 13:20:11 +0000 (0:00:00.562) 0:00:19.275 ********** 2025-03-23 13:20:11.324625 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:20:11.324829 | orchestrator | 2025-03-23 13:20:11.327516 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-03-23 13:20:11.330973 | orchestrator | Sunday 23 March 2025 13:20:11 +0000 (0:00:00.166) 0:00:19.442 ********** 2025-03-23 13:20:11.461770 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:11.462382 | orchestrator | 2025-03-23 13:20:11.462422 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-03-23 13:20:11.462599 | orchestrator | Sunday 23 March 2025 13:20:11 +0000 (0:00:00.138) 0:00:19.580 ********** 2025-03-23 13:20:11.575240 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:11.576097 | orchestrator | 2025-03-23 13:20:11.577388 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-03-23 13:20:11.578128 | orchestrator | Sunday 23 March 2025 13:20:11 +0000 (0:00:00.114) 0:00:19.694 ********** 2025-03-23 13:20:11.727290 | orchestrator | ok: [testbed-node-3] => { 2025-03-23 13:20:11.727494 | orchestrator |  "vgs_report": { 2025-03-23 13:20:11.729067 | orchestrator |  "vg": [] 2025-03-23 13:20:11.729816 | orchestrator |  } 2025-03-23 13:20:11.732582 | orchestrator | } 2025-03-23 13:20:11.733387 | orchestrator | 2025-03-23 13:20:11.734631 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-03-23 13:20:11.735730 | orchestrator | Sunday 23 March 2025 13:20:11 +0000 (0:00:00.151) 0:00:19.846 ********** 2025-03-23 13:20:11.859735 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:11.860382 | orchestrator | 2025-03-23 13:20:11.861807 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-03-23 13:20:11.861954 | orchestrator | Sunday 23 March 2025 13:20:11 +0000 (0:00:00.132) 0:00:19.978 ********** 2025-03-23 13:20:11.998137 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:11.998386 | orchestrator | 2025-03-23 13:20:11.999094 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-03-23 13:20:12.000137 | orchestrator | Sunday 23 March 2025 13:20:11 +0000 (0:00:00.138) 0:00:20.117 ********** 2025-03-23 13:20:12.162234 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:12.164793 | orchestrator | 2025-03-23 13:20:12.166607 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-03-23 13:20:12.166700 | orchestrator | Sunday 23 March 2025 13:20:12 +0000 (0:00:00.163) 0:00:20.280 ********** 2025-03-23 13:20:12.301799 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:12.302369 | orchestrator | 2025-03-23 13:20:12.303421 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-03-23 13:20:12.304417 | orchestrator | Sunday 23 March 2025 13:20:12 +0000 (0:00:00.139) 0:00:20.420 ********** 2025-03-23 
13:20:12.644358 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:12.645443 | orchestrator | 2025-03-23 13:20:12.645481 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-03-23 13:20:12.646331 | orchestrator | Sunday 23 March 2025 13:20:12 +0000 (0:00:00.342) 0:00:20.763 ********** 2025-03-23 13:20:12.785963 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:12.786474 | orchestrator | 2025-03-23 13:20:12.787665 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-03-23 13:20:12.788686 | orchestrator | Sunday 23 March 2025 13:20:12 +0000 (0:00:00.141) 0:00:20.905 ********** 2025-03-23 13:20:12.932116 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:12.932464 | orchestrator | 2025-03-23 13:20:12.933567 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-03-23 13:20:12.934839 | orchestrator | Sunday 23 March 2025 13:20:12 +0000 (0:00:00.146) 0:00:21.051 ********** 2025-03-23 13:20:13.096464 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:13.097844 | orchestrator | 2025-03-23 13:20:13.099568 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-03-23 13:20:13.100836 | orchestrator | Sunday 23 March 2025 13:20:13 +0000 (0:00:00.162) 0:00:21.214 ********** 2025-03-23 13:20:13.247487 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:13.248116 | orchestrator | 2025-03-23 13:20:13.249397 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-03-23 13:20:13.249517 | orchestrator | Sunday 23 March 2025 13:20:13 +0000 (0:00:00.152) 0:00:21.366 ********** 2025-03-23 13:20:13.389569 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:13.391690 | orchestrator | 2025-03-23 13:20:13.391723 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-03-23 13:20:13.391745 | orchestrator | Sunday 23 March 2025 13:20:13 +0000 (0:00:00.139) 0:00:21.506 ********** 2025-03-23 13:20:13.536180 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:13.537344 | orchestrator | 2025-03-23 13:20:13.538119 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-03-23 13:20:13.538151 | orchestrator | Sunday 23 March 2025 13:20:13 +0000 (0:00:00.149) 0:00:21.655 ********** 2025-03-23 13:20:13.704314 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:13.704526 | orchestrator | 2025-03-23 13:20:13.704797 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-03-23 13:20:13.706062 | orchestrator | Sunday 23 March 2025 13:20:13 +0000 (0:00:00.165) 0:00:21.821 ********** 2025-03-23 13:20:13.858196 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:13.858430 | orchestrator | 2025-03-23 13:20:13.858886 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-03-23 13:20:13.859268 | orchestrator | Sunday 23 March 2025 13:20:13 +0000 (0:00:00.157) 0:00:21.978 ********** 2025-03-23 13:20:14.025830 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:14.026430 | orchestrator | 2025-03-23 13:20:14.026461 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-03-23 13:20:14.026483 | orchestrator | Sunday 23 March 2025 13:20:14 +0000 (0:00:00.166) 0:00:22.144 
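
The "Gather DB/WAL VGs with total and available size in bytes" tasks further up, together with "Combine JSON from _db/wal/db_wal_vgs_cmd_output", indicate that VG sizes are collected as LVM JSON reports and then merged into vgs_report. A minimal sketch of that pattern, with ceph_db_vg_names as a hypothetical list of VG names to restrict the report to:

- name: Gather DB VGs with total and available size in bytes (sketch)
  ansible.builtin.command: >-
    vgs --units b --reportformat json
    -o vg_name,vg_size,vg_free {{ ceph_db_vg_names | default([]) | join(' ') }}
  register: _db_vgs_cmd_output
  changed_when: false
  # with an empty name list vgs would report every VG on the host;
  # the real play presumably skips the task in that case

- name: Build vgs_report from the JSON output (sketch)
  ansible.builtin.set_fact:
    vgs_report: >-
      {{ {'vg': (_db_vgs_cmd_output.stdout | from_json).report
                | map(attribute='vg') | flatten} }}

vgs --reportformat json wraps its rows in a top-level report list, which is why the sketch maps over report and flattens the per-report vg lists before storing them under the vg key, matching the shape of the vgs_report printed above.
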
********** 2025-03-23 13:20:14.192549 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'})  2025-03-23 13:20:14.193953 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'})  2025-03-23 13:20:14.194831 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:14.196885 | orchestrator | 2025-03-23 13:20:14.198282 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-03-23 13:20:14.199468 | orchestrator | Sunday 23 March 2025 13:20:14 +0000 (0:00:00.165) 0:00:22.309 ********** 2025-03-23 13:20:14.362050 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'})  2025-03-23 13:20:14.362177 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'})  2025-03-23 13:20:14.363308 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:14.364537 | orchestrator | 2025-03-23 13:20:14.364615 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-03-23 13:20:14.365403 | orchestrator | Sunday 23 March 2025 13:20:14 +0000 (0:00:00.170) 0:00:22.480 ********** 2025-03-23 13:20:14.766005 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'})  2025-03-23 13:20:14.766447 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'})  2025-03-23 13:20:14.768011 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:14.768987 | orchestrator | 2025-03-23 13:20:14.769918 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-03-23 13:20:14.770619 | orchestrator | Sunday 23 March 2025 13:20:14 +0000 (0:00:00.403) 0:00:22.884 ********** 2025-03-23 13:20:14.950509 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'})  2025-03-23 13:20:14.950804 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'})  2025-03-23 13:20:14.952285 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:14.955058 | orchestrator | 2025-03-23 13:20:15.160475 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-03-23 13:20:15.160509 | orchestrator | Sunday 23 March 2025 13:20:14 +0000 (0:00:00.185) 0:00:23.070 ********** 2025-03-23 13:20:15.160531 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'})  2025-03-23 13:20:15.160966 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'})  2025-03-23 13:20:15.162469 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:15.163535 | orchestrator | 2025-03-23 13:20:15.164888 | 
orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-03-23 13:20:15.165288 | orchestrator | Sunday 23 March 2025 13:20:15 +0000 (0:00:00.209) 0:00:23.279 ********** 2025-03-23 13:20:15.343178 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'})  2025-03-23 13:20:15.343733 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'})  2025-03-23 13:20:15.344406 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:15.345439 | orchestrator | 2025-03-23 13:20:15.346336 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-03-23 13:20:15.346984 | orchestrator | Sunday 23 March 2025 13:20:15 +0000 (0:00:00.182) 0:00:23.462 ********** 2025-03-23 13:20:15.523554 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'})  2025-03-23 13:20:15.523691 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'})  2025-03-23 13:20:15.525185 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:15.526721 | orchestrator | 2025-03-23 13:20:15.527920 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-03-23 13:20:15.529109 | orchestrator | Sunday 23 March 2025 13:20:15 +0000 (0:00:00.179) 0:00:23.642 ********** 2025-03-23 13:20:15.709929 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'})  2025-03-23 13:20:15.711743 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'})  2025-03-23 13:20:15.714560 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:15.714849 | orchestrator | 2025-03-23 13:20:15.714889 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-03-23 13:20:15.716425 | orchestrator | Sunday 23 March 2025 13:20:15 +0000 (0:00:00.186) 0:00:23.828 ********** 2025-03-23 13:20:16.335339 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:20:16.335612 | orchestrator | 2025-03-23 13:20:16.336590 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-03-23 13:20:16.337318 | orchestrator | Sunday 23 March 2025 13:20:16 +0000 (0:00:00.624) 0:00:24.453 ********** 2025-03-23 13:20:16.857094 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:20:16.857849 | orchestrator | 2025-03-23 13:20:16.860301 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-03-23 13:20:17.022591 | orchestrator | Sunday 23 March 2025 13:20:16 +0000 (0:00:00.521) 0:00:24.974 ********** 2025-03-23 13:20:17.022704 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:20:17.023364 | orchestrator | 2025-03-23 13:20:17.024441 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-03-23 13:20:17.026135 | orchestrator | Sunday 23 March 2025 13:20:17 +0000 (0:00:00.165) 0:00:25.140 ********** 2025-03-23 13:20:17.219652 | 
orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'vg_name': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'}) 2025-03-23 13:20:17.221860 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'vg_name': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'}) 2025-03-23 13:20:17.222632 | orchestrator | 2025-03-23 13:20:17.223533 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-03-23 13:20:17.224984 | orchestrator | Sunday 23 March 2025 13:20:17 +0000 (0:00:00.195) 0:00:25.336 ********** 2025-03-23 13:20:17.617438 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'})  2025-03-23 13:20:17.618565 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'})  2025-03-23 13:20:17.619182 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:17.621353 | orchestrator | 2025-03-23 13:20:17.831179 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-03-23 13:20:17.831298 | orchestrator | Sunday 23 March 2025 13:20:17 +0000 (0:00:00.399) 0:00:25.736 ********** 2025-03-23 13:20:17.831327 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'})  2025-03-23 13:20:17.833211 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'})  2025-03-23 13:20:17.835162 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:17.835219 | orchestrator | 2025-03-23 13:20:17.835519 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-03-23 13:20:17.836362 | orchestrator | Sunday 23 March 2025 13:20:17 +0000 (0:00:00.213) 0:00:25.950 ********** 2025-03-23 13:20:18.031066 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'})  2025-03-23 13:20:18.032200 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'})  2025-03-23 13:20:18.032235 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:20:18.032606 | orchestrator | 2025-03-23 13:20:18.033388 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-03-23 13:20:18.035634 | orchestrator | Sunday 23 March 2025 13:20:18 +0000 (0:00:00.200) 0:00:26.150 ********** 2025-03-23 13:20:18.782124 | orchestrator | ok: [testbed-node-3] => { 2025-03-23 13:20:18.784003 | orchestrator |  "lvm_report": { 2025-03-23 13:20:18.786948 | orchestrator |  "lv": [ 2025-03-23 13:20:18.786976 | orchestrator |  { 2025-03-23 13:20:18.787621 | orchestrator |  "lv_name": "osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25", 2025-03-23 13:20:18.788497 | orchestrator |  "vg_name": "ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25" 2025-03-23 13:20:18.789145 | orchestrator |  }, 2025-03-23 13:20:18.790003 | orchestrator |  { 2025-03-23 13:20:18.790430 | orchestrator |  "lv_name": "osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa", 2025-03-23 
13:20:18.791062 | orchestrator |  "vg_name": "ceph-8229b7a0-df8d-5815-8245-22e3d24081aa" 2025-03-23 13:20:18.791833 | orchestrator |  } 2025-03-23 13:20:18.792588 | orchestrator |  ], 2025-03-23 13:20:18.793076 | orchestrator |  "pv": [ 2025-03-23 13:20:18.793445 | orchestrator |  { 2025-03-23 13:20:18.793805 | orchestrator |  "pv_name": "/dev/sdb", 2025-03-23 13:20:18.794146 | orchestrator |  "vg_name": "ceph-8229b7a0-df8d-5815-8245-22e3d24081aa" 2025-03-23 13:20:18.795064 | orchestrator |  }, 2025-03-23 13:20:18.795222 | orchestrator |  { 2025-03-23 13:20:18.795564 | orchestrator |  "pv_name": "/dev/sdc", 2025-03-23 13:20:18.795859 | orchestrator |  "vg_name": "ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25" 2025-03-23 13:20:18.796191 | orchestrator |  } 2025-03-23 13:20:18.796499 | orchestrator |  ] 2025-03-23 13:20:18.796911 | orchestrator |  } 2025-03-23 13:20:18.797198 | orchestrator | } 2025-03-23 13:20:18.797969 | orchestrator | 2025-03-23 13:20:18.798178 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-03-23 13:20:18.798403 | orchestrator | 2025-03-23 13:20:18.798816 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-23 13:20:18.799107 | orchestrator | Sunday 23 March 2025 13:20:18 +0000 (0:00:00.750) 0:00:26.900 ********** 2025-03-23 13:20:19.400644 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-03-23 13:20:19.400928 | orchestrator | 2025-03-23 13:20:19.402303 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-23 13:20:19.649917 | orchestrator | Sunday 23 March 2025 13:20:19 +0000 (0:00:00.618) 0:00:27.519 ********** 2025-03-23 13:20:19.650012 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:20:19.650381 | orchestrator | 2025-03-23 13:20:19.650412 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:19.650433 | orchestrator | Sunday 23 March 2025 13:20:19 +0000 (0:00:00.250) 0:00:27.769 ********** 2025-03-23 13:20:20.156892 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-03-23 13:20:20.157790 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-03-23 13:20:20.159750 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-03-23 13:20:20.159863 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-03-23 13:20:20.161620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-03-23 13:20:20.161729 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-03-23 13:20:20.162856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-03-23 13:20:20.163716 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-03-23 13:20:20.164600 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-03-23 13:20:20.165544 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-03-23 13:20:20.166302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-03-23 13:20:20.166903 | orchestrator | included: 
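
The lvm_report printed above for testbed-node-3 (two osd-block LVs in their ceph-<uuid> VGs, backed by /dev/sdb and /dev/sdc) has the same shape as the JSON reports that lvs and pvs emit directly. Assuming that is roughly what the "Get list of Ceph LVs/PVs with associated VGs" tasks do, a sketch:

- name: Get list of Ceph LVs with associated VGs (sketch)
  ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
  register: _lvs_cmd_output
  changed_when: false

- name: Get list of Ceph PVs with associated VGs (sketch)
  ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
  register: _pvs_cmd_output
  changed_when: false

- name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output (sketch)
  ansible.builtin.set_fact:
    lvm_report:
      # note: this lists every LV/PV on the host; any filtering to ceph-* VGs is left out of the sketch
      lv: "{{ (_lvs_cmd_output.stdout | from_json).report | map(attribute='lv') | flatten }}"
      pv: "{{ (_pvs_cmd_output.stdout | from_json).report | map(attribute='pv') | flatten }}"
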
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-03-23 13:20:20.167246 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-03-23 13:20:20.167936 | orchestrator | 2025-03-23 13:20:20.168585 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:20.168825 | orchestrator | Sunday 23 March 2025 13:20:20 +0000 (0:00:00.506) 0:00:28.275 ********** 2025-03-23 13:20:20.364515 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:20.367219 | orchestrator | 2025-03-23 13:20:20.367929 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:20.367969 | orchestrator | Sunday 23 March 2025 13:20:20 +0000 (0:00:00.206) 0:00:28.481 ********** 2025-03-23 13:20:20.580679 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:20.581773 | orchestrator | 2025-03-23 13:20:20.582620 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:20.583492 | orchestrator | Sunday 23 March 2025 13:20:20 +0000 (0:00:00.216) 0:00:28.698 ********** 2025-03-23 13:20:20.804653 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:20.805169 | orchestrator | 2025-03-23 13:20:20.805210 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:20.805992 | orchestrator | Sunday 23 March 2025 13:20:20 +0000 (0:00:00.224) 0:00:28.923 ********** 2025-03-23 13:20:21.014677 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:21.015225 | orchestrator | 2025-03-23 13:20:21.246350 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:21.246469 | orchestrator | Sunday 23 March 2025 13:20:21 +0000 (0:00:00.207) 0:00:29.130 ********** 2025-03-23 13:20:21.246502 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:21.246595 | orchestrator | 2025-03-23 13:20:21.246619 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:21.247362 | orchestrator | Sunday 23 March 2025 13:20:21 +0000 (0:00:00.232) 0:00:29.363 ********** 2025-03-23 13:20:21.454995 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:21.455322 | orchestrator | 2025-03-23 13:20:21.456014 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:21.457009 | orchestrator | Sunday 23 March 2025 13:20:21 +0000 (0:00:00.210) 0:00:29.574 ********** 2025-03-23 13:20:21.880174 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:21.880402 | orchestrator | 2025-03-23 13:20:21.880998 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:21.882551 | orchestrator | Sunday 23 March 2025 13:20:21 +0000 (0:00:00.425) 0:00:29.999 ********** 2025-03-23 13:20:22.069413 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:22.069899 | orchestrator | 2025-03-23 13:20:22.073194 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:22.074887 | orchestrator | Sunday 23 March 2025 13:20:22 +0000 (0:00:00.186) 0:00:30.186 ********** 2025-03-23 13:20:22.508427 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74) 2025-03-23 13:20:22.509160 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74) 2025-03-23 13:20:22.510690 | orchestrator | 2025-03-23 13:20:22.511454 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:22.512501 | orchestrator | Sunday 23 March 2025 13:20:22 +0000 (0:00:00.440) 0:00:30.626 ********** 2025-03-23 13:20:22.956790 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_87cd8e5e-a11a-492b-892b-8d669e416dd6) 2025-03-23 13:20:22.958422 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_87cd8e5e-a11a-492b-892b-8d669e416dd6) 2025-03-23 13:20:22.959512 | orchestrator | 2025-03-23 13:20:22.960633 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:22.961912 | orchestrator | Sunday 23 March 2025 13:20:22 +0000 (0:00:00.448) 0:00:31.075 ********** 2025-03-23 13:20:23.431616 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8fbee761-a5d2-4623-bf19-8989346ac6dd) 2025-03-23 13:20:23.432450 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8fbee761-a5d2-4623-bf19-8989346ac6dd) 2025-03-23 13:20:23.433231 | orchestrator | 2025-03-23 13:20:23.434308 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:23.435136 | orchestrator | Sunday 23 March 2025 13:20:23 +0000 (0:00:00.474) 0:00:31.549 ********** 2025-03-23 13:20:23.917919 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ea8f35dc-a35e-4089-a77c-db984e90bcf5) 2025-03-23 13:20:23.918095 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ea8f35dc-a35e-4089-a77c-db984e90bcf5) 2025-03-23 13:20:23.918645 | orchestrator | 2025-03-23 13:20:23.919420 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:23.920020 | orchestrator | Sunday 23 March 2025 13:20:23 +0000 (0:00:00.487) 0:00:32.037 ********** 2025-03-23 13:20:24.268907 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-23 13:20:24.270221 | orchestrator | 2025-03-23 13:20:24.271127 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:24.272290 | orchestrator | Sunday 23 March 2025 13:20:24 +0000 (0:00:00.350) 0:00:32.387 ********** 2025-03-23 13:20:24.776978 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-03-23 13:20:24.777686 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-03-23 13:20:24.778973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-03-23 13:20:24.780089 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-03-23 13:20:24.780864 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-03-23 13:20:24.781591 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-03-23 13:20:24.782123 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-03-23 13:20:24.782525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-03-23 13:20:24.783299 | orchestrator | included: 
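
For testbed-node-4 the included _add-device-links.yml (not shown in this log) resolves stable /dev/disk/by-id names such as the scsi-0QEMU_... and scsi-SQEMU_... links listed above. Shown purely as an illustration, one way to collect those links per device is via udevadm:

- name: Collect /dev/disk/by-id symlinks for a device (illustrative sketch)
  ansible.builtin.command: udevadm info --query=symlink --name=/dev/{{ item }}
  register: _device_symlinks   # hypothetical register name
  changed_when: false
  loop: [sdb, sdc]             # device list chosen only for the example

udevadm prints a space-separated list of symlinks relative to /dev; the entries under disk/by-id/ correspond to the by-id items logged for each disk.
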
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-03-23 13:20:24.783442 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-03-23 13:20:24.784295 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-03-23 13:20:24.784514 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-03-23 13:20:24.785487 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-03-23 13:20:24.785751 | orchestrator | 2025-03-23 13:20:24.786229 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:24.786543 | orchestrator | Sunday 23 March 2025 13:20:24 +0000 (0:00:00.506) 0:00:32.894 ********** 2025-03-23 13:20:24.980492 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:24.981154 | orchestrator | 2025-03-23 13:20:24.981548 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:24.982128 | orchestrator | Sunday 23 March 2025 13:20:24 +0000 (0:00:00.205) 0:00:33.100 ********** 2025-03-23 13:20:25.399481 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:25.400001 | orchestrator | 2025-03-23 13:20:25.401073 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:25.401926 | orchestrator | Sunday 23 March 2025 13:20:25 +0000 (0:00:00.416) 0:00:33.517 ********** 2025-03-23 13:20:25.599034 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:25.599124 | orchestrator | 2025-03-23 13:20:25.599385 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:25.599829 | orchestrator | Sunday 23 March 2025 13:20:25 +0000 (0:00:00.200) 0:00:33.717 ********** 2025-03-23 13:20:25.809654 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:25.810222 | orchestrator | 2025-03-23 13:20:25.810280 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:25.810929 | orchestrator | Sunday 23 March 2025 13:20:25 +0000 (0:00:00.211) 0:00:33.929 ********** 2025-03-23 13:20:26.059145 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:26.060151 | orchestrator | 2025-03-23 13:20:26.060339 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:26.060434 | orchestrator | Sunday 23 March 2025 13:20:26 +0000 (0:00:00.249) 0:00:34.178 ********** 2025-03-23 13:20:26.266512 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:26.266873 | orchestrator | 2025-03-23 13:20:26.268399 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:26.269412 | orchestrator | Sunday 23 March 2025 13:20:26 +0000 (0:00:00.207) 0:00:34.386 ********** 2025-03-23 13:20:26.478099 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:26.478604 | orchestrator | 2025-03-23 13:20:26.478719 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:26.478753 | orchestrator | Sunday 23 March 2025 13:20:26 +0000 (0:00:00.208) 0:00:34.595 ********** 2025-03-23 13:20:26.764232 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:26.765619 | orchestrator | 2025-03-23 13:20:26.765662 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-03-23 13:20:26.766727 | orchestrator | Sunday 23 March 2025 13:20:26 +0000 (0:00:00.285) 0:00:34.880 ********** 2025-03-23 13:20:27.457789 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-03-23 13:20:27.458640 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-03-23 13:20:27.458677 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-03-23 13:20:27.460059 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-03-23 13:20:27.462005 | orchestrator | 2025-03-23 13:20:27.462505 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:27.463627 | orchestrator | Sunday 23 March 2025 13:20:27 +0000 (0:00:00.695) 0:00:35.575 ********** 2025-03-23 13:20:27.716570 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:27.717303 | orchestrator | 2025-03-23 13:20:27.718248 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:27.718619 | orchestrator | Sunday 23 March 2025 13:20:27 +0000 (0:00:00.261) 0:00:35.837 ********** 2025-03-23 13:20:27.949425 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:27.950232 | orchestrator | 2025-03-23 13:20:27.951198 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:27.951349 | orchestrator | Sunday 23 March 2025 13:20:27 +0000 (0:00:00.229) 0:00:36.066 ********** 2025-03-23 13:20:28.155438 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:28.156061 | orchestrator | 2025-03-23 13:20:28.156093 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:28.157482 | orchestrator | Sunday 23 March 2025 13:20:28 +0000 (0:00:00.207) 0:00:36.274 ********** 2025-03-23 13:20:28.858726 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:28.859451 | orchestrator | 2025-03-23 13:20:28.860930 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-03-23 13:20:28.864461 | orchestrator | Sunday 23 March 2025 13:20:28 +0000 (0:00:00.702) 0:00:36.976 ********** 2025-03-23 13:20:29.015799 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:29.017116 | orchestrator | 2025-03-23 13:20:29.017539 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-03-23 13:20:29.018591 | orchestrator | Sunday 23 March 2025 13:20:29 +0000 (0:00:00.158) 0:00:37.135 ********** 2025-03-23 13:20:29.270121 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'}}) 2025-03-23 13:20:29.271101 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'}}) 2025-03-23 13:20:29.272709 | orchestrator | 2025-03-23 13:20:29.272739 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-03-23 13:20:31.130153 | orchestrator | Sunday 23 March 2025 13:20:29 +0000 (0:00:00.253) 0:00:37.388 ********** 2025-03-23 13:20:31.130303 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'}) 2025-03-23 13:20:31.132421 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 
'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'}) 2025-03-23 13:20:31.132781 | orchestrator | 2025-03-23 13:20:31.133580 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-03-23 13:20:31.134524 | orchestrator | Sunday 23 March 2025 13:20:31 +0000 (0:00:01.858) 0:00:39.246 ********** 2025-03-23 13:20:31.314727 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'})  2025-03-23 13:20:31.315827 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'})  2025-03-23 13:20:31.317118 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:31.319500 | orchestrator | 2025-03-23 13:20:31.319711 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-03-23 13:20:32.656112 | orchestrator | Sunday 23 March 2025 13:20:31 +0000 (0:00:00.187) 0:00:39.434 ********** 2025-03-23 13:20:32.656336 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'}) 2025-03-23 13:20:32.656420 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'}) 2025-03-23 13:20:32.656446 | orchestrator | 2025-03-23 13:20:32.656888 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-03-23 13:20:32.657481 | orchestrator | Sunday 23 March 2025 13:20:32 +0000 (0:00:01.339) 0:00:40.774 ********** 2025-03-23 13:20:32.832223 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'})  2025-03-23 13:20:32.832789 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'})  2025-03-23 13:20:32.833447 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:32.834642 | orchestrator | 2025-03-23 13:20:32.836974 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-03-23 13:20:32.837004 | orchestrator | Sunday 23 March 2025 13:20:32 +0000 (0:00:00.176) 0:00:40.950 ********** 2025-03-23 13:20:33.025345 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:33.213771 | orchestrator | 2025-03-23 13:20:33.213848 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-03-23 13:20:33.213892 | orchestrator | Sunday 23 March 2025 13:20:33 +0000 (0:00:00.191) 0:00:41.142 ********** 2025-03-23 13:20:33.213920 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'})  2025-03-23 13:20:33.214680 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'})  2025-03-23 13:20:33.215850 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:33.216995 | orchestrator | 2025-03-23 13:20:33.217611 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-03-23 13:20:33.218632 | orchestrator | Sunday 23 
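
As on node-3, the VG and LV names on node-4 are derived from the osd_lvm_uuid of each entry in ceph_osd_devices (VG ceph-<uuid>, LV osd-block-<uuid>). A sketch of how lvm_volumes entries of the shape seen in these loops could be generated from that dict; the actual derivation in the osism configuration may differ:

- name: Build lvm_volumes from ceph_osd_devices (illustrative sketch)
  ansible.builtin.set_fact:
    lvm_volumes: >-
      {{ lvm_volumes | default([]) + [{
           'data': 'osd-block-' ~ item.value.osd_lvm_uuid,
           'data_vg': 'ceph-' ~ item.value.osd_lvm_uuid
         }] }}
  loop: "{{ ceph_osd_devices | dict2items }}"   # e.g. sdb/sdc with their osd_lvm_uuid values
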
March 2025 13:20:33 +0000 (0:00:00.188) 0:00:41.330 ********** 2025-03-23 13:20:33.554564 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:33.555609 | orchestrator | 2025-03-23 13:20:33.557078 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-03-23 13:20:33.557791 | orchestrator | Sunday 23 March 2025 13:20:33 +0000 (0:00:00.343) 0:00:41.674 ********** 2025-03-23 13:20:33.721479 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'})  2025-03-23 13:20:33.723474 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'})  2025-03-23 13:20:33.725010 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:33.726528 | orchestrator | 2025-03-23 13:20:33.727834 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-03-23 13:20:33.728882 | orchestrator | Sunday 23 March 2025 13:20:33 +0000 (0:00:00.166) 0:00:41.840 ********** 2025-03-23 13:20:33.883851 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:33.884773 | orchestrator | 2025-03-23 13:20:33.886077 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-03-23 13:20:33.891858 | orchestrator | Sunday 23 March 2025 13:20:33 +0000 (0:00:00.162) 0:00:42.003 ********** 2025-03-23 13:20:34.060050 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'})  2025-03-23 13:20:34.061560 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'})  2025-03-23 13:20:34.064455 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:34.202227 | orchestrator | 2025-03-23 13:20:34.202298 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-03-23 13:20:34.202316 | orchestrator | Sunday 23 March 2025 13:20:34 +0000 (0:00:00.176) 0:00:42.179 ********** 2025-03-23 13:20:34.202337 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:20:34.203416 | orchestrator | 2025-03-23 13:20:34.204612 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-03-23 13:20:34.206562 | orchestrator | Sunday 23 March 2025 13:20:34 +0000 (0:00:00.141) 0:00:42.321 ********** 2025-03-23 13:20:34.426863 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'})  2025-03-23 13:20:34.427930 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'})  2025-03-23 13:20:34.427963 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:34.428641 | orchestrator | 2025-03-23 13:20:34.429136 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-03-23 13:20:34.431352 | orchestrator | Sunday 23 March 2025 13:20:34 +0000 (0:00:00.224) 0:00:42.546 ********** 2025-03-23 13:20:34.623481 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 
'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'})  2025-03-23 13:20:34.625501 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'})  2025-03-23 13:20:34.627952 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:34.628552 | orchestrator | 2025-03-23 13:20:34.629848 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-03-23 13:20:34.630095 | orchestrator | Sunday 23 March 2025 13:20:34 +0000 (0:00:00.197) 0:00:42.743 ********** 2025-03-23 13:20:34.813999 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'})  2025-03-23 13:20:34.814528 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'})  2025-03-23 13:20:34.815588 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:34.816579 | orchestrator | 2025-03-23 13:20:34.817595 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-03-23 13:20:34.818222 | orchestrator | Sunday 23 March 2025 13:20:34 +0000 (0:00:00.189) 0:00:42.932 ********** 2025-03-23 13:20:34.991194 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:34.991524 | orchestrator | 2025-03-23 13:20:34.991575 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-03-23 13:20:34.991746 | orchestrator | Sunday 23 March 2025 13:20:34 +0000 (0:00:00.177) 0:00:43.109 ********** 2025-03-23 13:20:35.136931 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:35.137621 | orchestrator | 2025-03-23 13:20:35.137661 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-03-23 13:20:35.138130 | orchestrator | Sunday 23 March 2025 13:20:35 +0000 (0:00:00.145) 0:00:43.255 ********** 2025-03-23 13:20:35.288798 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:35.292345 | orchestrator | 2025-03-23 13:20:35.687658 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-03-23 13:20:35.687743 | orchestrator | Sunday 23 March 2025 13:20:35 +0000 (0:00:00.150) 0:00:43.405 ********** 2025-03-23 13:20:35.687772 | orchestrator | ok: [testbed-node-4] => { 2025-03-23 13:20:35.688043 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-03-23 13:20:35.689149 | orchestrator | } 2025-03-23 13:20:35.691480 | orchestrator | 2025-03-23 13:20:35.691927 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-03-23 13:20:35.691962 | orchestrator | Sunday 23 March 2025 13:20:35 +0000 (0:00:00.400) 0:00:43.805 ********** 2025-03-23 13:20:35.856869 | orchestrator | ok: [testbed-node-4] => { 2025-03-23 13:20:35.858756 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-03-23 13:20:35.861734 | orchestrator | } 2025-03-23 13:20:35.863752 | orchestrator | 2025-03-23 13:20:35.864393 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-03-23 13:20:35.865084 | orchestrator | Sunday 23 March 2025 13:20:35 +0000 (0:00:00.170) 0:00:43.975 ********** 2025-03-23 13:20:36.022238 | orchestrator | ok: [testbed-node-4] => { 2025-03-23 13:20:36.026475 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-03-23 
13:20:36.563647 | orchestrator | } 2025-03-23 13:20:36.563753 | orchestrator | 2025-03-23 13:20:36.563770 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-03-23 13:20:36.563784 | orchestrator | Sunday 23 March 2025 13:20:36 +0000 (0:00:00.163) 0:00:44.139 ********** 2025-03-23 13:20:36.563814 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:20:37.123948 | orchestrator | 2025-03-23 13:20:37.124098 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-03-23 13:20:37.124119 | orchestrator | Sunday 23 March 2025 13:20:36 +0000 (0:00:00.541) 0:00:44.680 ********** 2025-03-23 13:20:37.124151 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:20:37.124231 | orchestrator | 2025-03-23 13:20:37.124594 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-03-23 13:20:37.124624 | orchestrator | Sunday 23 March 2025 13:20:37 +0000 (0:00:00.560) 0:00:45.241 ********** 2025-03-23 13:20:37.696917 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:20:37.698085 | orchestrator | 2025-03-23 13:20:37.698636 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-03-23 13:20:37.699552 | orchestrator | Sunday 23 March 2025 13:20:37 +0000 (0:00:00.573) 0:00:45.815 ********** 2025-03-23 13:20:37.856425 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:20:37.858239 | orchestrator | 2025-03-23 13:20:37.858786 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-03-23 13:20:37.862307 | orchestrator | Sunday 23 March 2025 13:20:37 +0000 (0:00:00.160) 0:00:45.975 ********** 2025-03-23 13:20:37.996329 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:37.997413 | orchestrator | 2025-03-23 13:20:37.998196 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-03-23 13:20:37.998713 | orchestrator | Sunday 23 March 2025 13:20:37 +0000 (0:00:00.138) 0:00:46.113 ********** 2025-03-23 13:20:38.118653 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:38.122113 | orchestrator | 2025-03-23 13:20:38.123039 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-03-23 13:20:38.123069 | orchestrator | Sunday 23 March 2025 13:20:38 +0000 (0:00:00.123) 0:00:46.237 ********** 2025-03-23 13:20:38.277783 | orchestrator | ok: [testbed-node-4] => { 2025-03-23 13:20:38.277957 | orchestrator |  "vgs_report": { 2025-03-23 13:20:38.278812 | orchestrator |  "vg": [] 2025-03-23 13:20:38.279421 | orchestrator |  } 2025-03-23 13:20:38.281225 | orchestrator | } 2025-03-23 13:20:38.281590 | orchestrator | 2025-03-23 13:20:38.284721 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-03-23 13:20:38.285244 | orchestrator | Sunday 23 March 2025 13:20:38 +0000 (0:00:00.159) 0:00:46.397 ********** 2025-03-23 13:20:38.444602 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:38.445165 | orchestrator | 2025-03-23 13:20:38.445806 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-03-23 13:20:38.446257 | orchestrator | Sunday 23 March 2025 13:20:38 +0000 (0:00:00.166) 0:00:46.563 ********** 2025-03-23 13:20:38.845512 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:38.845769 | orchestrator | 2025-03-23 13:20:38.846534 | orchestrator | TASK [Print size needed for LVs on 
ceph_db_devices] **************************** 2025-03-23 13:20:38.847566 | orchestrator | Sunday 23 March 2025 13:20:38 +0000 (0:00:00.400) 0:00:46.964 ********** 2025-03-23 13:20:39.001231 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:39.001428 | orchestrator | 2025-03-23 13:20:39.002744 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-03-23 13:20:39.003827 | orchestrator | Sunday 23 March 2025 13:20:38 +0000 (0:00:00.154) 0:00:47.119 ********** 2025-03-23 13:20:39.165628 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:39.167833 | orchestrator | 2025-03-23 13:20:39.170722 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-03-23 13:20:39.310711 | orchestrator | Sunday 23 March 2025 13:20:39 +0000 (0:00:00.165) 0:00:47.285 ********** 2025-03-23 13:20:39.310785 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:39.311526 | orchestrator | 2025-03-23 13:20:39.311561 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-03-23 13:20:39.472384 | orchestrator | Sunday 23 March 2025 13:20:39 +0000 (0:00:00.145) 0:00:47.430 ********** 2025-03-23 13:20:39.472454 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:39.472911 | orchestrator | 2025-03-23 13:20:39.473361 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-03-23 13:20:39.473391 | orchestrator | Sunday 23 March 2025 13:20:39 +0000 (0:00:00.162) 0:00:47.592 ********** 2025-03-23 13:20:39.620896 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:39.622943 | orchestrator | 2025-03-23 13:20:39.757106 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-03-23 13:20:39.757188 | orchestrator | Sunday 23 March 2025 13:20:39 +0000 (0:00:00.146) 0:00:47.738 ********** 2025-03-23 13:20:39.757216 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:39.757983 | orchestrator | 2025-03-23 13:20:39.758760 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-03-23 13:20:39.759361 | orchestrator | Sunday 23 March 2025 13:20:39 +0000 (0:00:00.137) 0:00:47.876 ********** 2025-03-23 13:20:39.911612 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:39.912516 | orchestrator | 2025-03-23 13:20:39.912796 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-03-23 13:20:39.912992 | orchestrator | Sunday 23 March 2025 13:20:39 +0000 (0:00:00.148) 0:00:48.025 ********** 2025-03-23 13:20:40.043592 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:40.043752 | orchestrator | 2025-03-23 13:20:40.044789 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-03-23 13:20:40.045738 | orchestrator | Sunday 23 March 2025 13:20:40 +0000 (0:00:00.137) 0:00:48.162 ********** 2025-03-23 13:20:40.204651 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:40.205084 | orchestrator | 2025-03-23 13:20:40.206185 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-03-23 13:20:40.206725 | orchestrator | Sunday 23 March 2025 13:20:40 +0000 (0:00:00.161) 0:00:48.323 ********** 2025-03-23 13:20:40.372611 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:40.375420 | orchestrator | 2025-03-23 13:20:40.376003 | orchestrator | TASK [Fail if DB LV 
size < 30 GiB for ceph_db_devices] ************************* 2025-03-23 13:20:40.376559 | orchestrator | Sunday 23 March 2025 13:20:40 +0000 (0:00:00.166) 0:00:48.490 ********** 2025-03-23 13:20:40.517838 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:40.517969 | orchestrator | 2025-03-23 13:20:40.518328 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-03-23 13:20:40.518436 | orchestrator | Sunday 23 March 2025 13:20:40 +0000 (0:00:00.147) 0:00:48.637 ********** 2025-03-23 13:20:40.915938 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:40.916069 | orchestrator | 2025-03-23 13:20:40.916093 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-03-23 13:20:40.917073 | orchestrator | Sunday 23 March 2025 13:20:40 +0000 (0:00:00.397) 0:00:49.035 ********** 2025-03-23 13:20:41.116837 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'})  2025-03-23 13:20:41.117761 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'})  2025-03-23 13:20:41.120454 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:41.125770 | orchestrator | 2025-03-23 13:20:41.125803 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-03-23 13:20:41.283866 | orchestrator | Sunday 23 March 2025 13:20:41 +0000 (0:00:00.199) 0:00:49.235 ********** 2025-03-23 13:20:41.283983 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'})  2025-03-23 13:20:41.284091 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'})  2025-03-23 13:20:41.285442 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:41.285507 | orchestrator | 2025-03-23 13:20:41.288781 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-03-23 13:20:41.289236 | orchestrator | Sunday 23 March 2025 13:20:41 +0000 (0:00:00.167) 0:00:49.403 ********** 2025-03-23 13:20:41.460685 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'})  2025-03-23 13:20:41.461259 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'})  2025-03-23 13:20:41.463357 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:41.463563 | orchestrator | 2025-03-23 13:20:41.464326 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-03-23 13:20:41.464652 | orchestrator | Sunday 23 March 2025 13:20:41 +0000 (0:00:00.175) 0:00:49.578 ********** 2025-03-23 13:20:41.656658 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'})  2025-03-23 13:20:41.658632 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'})  2025-03-23 
13:20:41.658672 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:41.659707 | orchestrator | 2025-03-23 13:20:41.660513 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-03-23 13:20:41.661657 | orchestrator | Sunday 23 March 2025 13:20:41 +0000 (0:00:00.196) 0:00:49.775 ********** 2025-03-23 13:20:41.854501 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'})  2025-03-23 13:20:41.856621 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'})  2025-03-23 13:20:41.857367 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:41.858542 | orchestrator | 2025-03-23 13:20:41.859828 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-03-23 13:20:41.860685 | orchestrator | Sunday 23 March 2025 13:20:41 +0000 (0:00:00.197) 0:00:49.973 ********** 2025-03-23 13:20:42.025514 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'})  2025-03-23 13:20:42.025748 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'})  2025-03-23 13:20:42.025794 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:42.027509 | orchestrator | 2025-03-23 13:20:42.028357 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-03-23 13:20:42.028614 | orchestrator | Sunday 23 March 2025 13:20:42 +0000 (0:00:00.169) 0:00:50.143 ********** 2025-03-23 13:20:42.198780 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'})  2025-03-23 13:20:42.198914 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'})  2025-03-23 13:20:42.200080 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:42.201131 | orchestrator | 2025-03-23 13:20:42.201650 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-03-23 13:20:42.375097 | orchestrator | Sunday 23 March 2025 13:20:42 +0000 (0:00:00.174) 0:00:50.318 ********** 2025-03-23 13:20:42.375164 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'})  2025-03-23 13:20:42.375237 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'})  2025-03-23 13:20:42.377332 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:42.377438 | orchestrator | 2025-03-23 13:20:42.378852 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-03-23 13:20:42.378880 | orchestrator | Sunday 23 March 2025 13:20:42 +0000 (0:00:00.177) 0:00:50.495 ********** 2025-03-23 13:20:42.899784 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:20:42.900647 | orchestrator | 2025-03-23 13:20:42.901730 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] 
******************************** 2025-03-23 13:20:42.901767 | orchestrator | Sunday 23 March 2025 13:20:42 +0000 (0:00:00.523) 0:00:51.018 ********** 2025-03-23 13:20:43.457521 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:20:43.459891 | orchestrator | 2025-03-23 13:20:43.460372 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-03-23 13:20:43.460429 | orchestrator | Sunday 23 March 2025 13:20:43 +0000 (0:00:00.555) 0:00:51.574 ********** 2025-03-23 13:20:43.845477 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:20:43.846442 | orchestrator | 2025-03-23 13:20:43.848245 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-03-23 13:20:43.849952 | orchestrator | Sunday 23 March 2025 13:20:43 +0000 (0:00:00.389) 0:00:51.964 ********** 2025-03-23 13:20:44.045756 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'vg_name': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'}) 2025-03-23 13:20:44.046960 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'vg_name': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'}) 2025-03-23 13:20:44.048229 | orchestrator | 2025-03-23 13:20:44.049480 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-03-23 13:20:44.050595 | orchestrator | Sunday 23 March 2025 13:20:44 +0000 (0:00:00.200) 0:00:52.164 ********** 2025-03-23 13:20:44.217552 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'})  2025-03-23 13:20:44.217679 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'})  2025-03-23 13:20:44.218197 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:44.218786 | orchestrator | 2025-03-23 13:20:44.219027 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-03-23 13:20:44.219417 | orchestrator | Sunday 23 March 2025 13:20:44 +0000 (0:00:00.171) 0:00:52.336 ********** 2025-03-23 13:20:44.415817 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'})  2025-03-23 13:20:44.416753 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'})  2025-03-23 13:20:44.418104 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:44.421451 | orchestrator | 2025-03-23 13:20:44.422227 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-03-23 13:20:44.422750 | orchestrator | Sunday 23 March 2025 13:20:44 +0000 (0:00:00.198) 0:00:52.534 ********** 2025-03-23 13:20:44.605973 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'})  2025-03-23 13:20:44.607342 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'})  2025-03-23 13:20:44.608598 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:20:44.609920 | orchestrator | 2025-03-23 
13:20:44.610965 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-03-23 13:20:44.611451 | orchestrator | Sunday 23 March 2025 13:20:44 +0000 (0:00:00.190) 0:00:52.725 ********** 2025-03-23 13:20:45.520152 | orchestrator | ok: [testbed-node-4] => { 2025-03-23 13:20:45.522965 | orchestrator |  "lvm_report": { 2025-03-23 13:20:45.527447 | orchestrator |  "lv": [ 2025-03-23 13:20:45.527677 | orchestrator |  { 2025-03-23 13:20:45.528476 | orchestrator |  "lv_name": "osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233", 2025-03-23 13:20:45.531490 | orchestrator |  "vg_name": "ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233" 2025-03-23 13:20:45.532241 | orchestrator |  }, 2025-03-23 13:20:45.532573 | orchestrator |  { 2025-03-23 13:20:45.533023 | orchestrator |  "lv_name": "osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb", 2025-03-23 13:20:45.534487 | orchestrator |  "vg_name": "ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb" 2025-03-23 13:20:45.534653 | orchestrator |  } 2025-03-23 13:20:45.534852 | orchestrator |  ], 2025-03-23 13:20:45.535358 | orchestrator |  "pv": [ 2025-03-23 13:20:45.535664 | orchestrator |  { 2025-03-23 13:20:45.535842 | orchestrator |  "pv_name": "/dev/sdb", 2025-03-23 13:20:45.536388 | orchestrator |  "vg_name": "ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233" 2025-03-23 13:20:45.538091 | orchestrator |  }, 2025-03-23 13:20:45.538176 | orchestrator |  { 2025-03-23 13:20:45.538198 | orchestrator |  "pv_name": "/dev/sdc", 2025-03-23 13:20:45.538212 | orchestrator |  "vg_name": "ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb" 2025-03-23 13:20:45.538227 | orchestrator |  } 2025-03-23 13:20:45.538242 | orchestrator |  ] 2025-03-23 13:20:45.538260 | orchestrator |  } 2025-03-23 13:20:45.538466 | orchestrator | } 2025-03-23 13:20:45.539028 | orchestrator | 2025-03-23 13:20:45.539315 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-03-23 13:20:45.539343 | orchestrator | 2025-03-23 13:20:45.539624 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-23 13:20:45.540082 | orchestrator | Sunday 23 March 2025 13:20:45 +0000 (0:00:00.913) 0:00:53.638 ********** 2025-03-23 13:20:45.831610 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-03-23 13:20:45.833180 | orchestrator | 2025-03-23 13:20:46.102923 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-23 13:20:46.103008 | orchestrator | Sunday 23 March 2025 13:20:45 +0000 (0:00:00.312) 0:00:53.951 ********** 2025-03-23 13:20:46.103030 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:20:46.104584 | orchestrator | 2025-03-23 13:20:46.105623 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:46.107069 | orchestrator | Sunday 23 March 2025 13:20:46 +0000 (0:00:00.270) 0:00:54.222 ********** 2025-03-23 13:20:46.583350 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-03-23 13:20:46.584607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-03-23 13:20:46.587651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-03-23 13:20:46.588604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-03-23 13:20:46.589353 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-03-23 13:20:46.589802 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-03-23 13:20:46.590537 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-03-23 13:20:46.590907 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-03-23 13:20:46.591690 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-03-23 13:20:46.592678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-03-23 13:20:46.593406 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-03-23 13:20:46.594440 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-03-23 13:20:46.595070 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-03-23 13:20:46.595829 | orchestrator | 2025-03-23 13:20:46.596287 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:46.596640 | orchestrator | Sunday 23 March 2025 13:20:46 +0000 (0:00:00.479) 0:00:54.701 ********** 2025-03-23 13:20:46.793470 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:20:46.793649 | orchestrator | 2025-03-23 13:20:46.794441 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:46.795469 | orchestrator | Sunday 23 March 2025 13:20:46 +0000 (0:00:00.210) 0:00:54.912 ********** 2025-03-23 13:20:47.023974 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:20:47.024558 | orchestrator | 2025-03-23 13:20:47.025359 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:47.026105 | orchestrator | Sunday 23 March 2025 13:20:47 +0000 (0:00:00.230) 0:00:55.142 ********** 2025-03-23 13:20:47.233679 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:20:47.233984 | orchestrator | 2025-03-23 13:20:47.234239 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:47.235218 | orchestrator | Sunday 23 March 2025 13:20:47 +0000 (0:00:00.210) 0:00:55.352 ********** 2025-03-23 13:20:47.455009 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:20:47.455119 | orchestrator | 2025-03-23 13:20:47.455225 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:47.455250 | orchestrator | Sunday 23 March 2025 13:20:47 +0000 (0:00:00.221) 0:00:55.574 ********** 2025-03-23 13:20:48.121096 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:20:48.123353 | orchestrator | 2025-03-23 13:20:48.123654 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:48.123686 | orchestrator | Sunday 23 March 2025 13:20:48 +0000 (0:00:00.664) 0:00:56.239 ********** 2025-03-23 13:20:48.331429 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:20:48.332315 | orchestrator | 2025-03-23 13:20:48.333180 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:48.333985 | orchestrator | Sunday 23 March 2025 13:20:48 +0000 (0:00:00.210) 0:00:56.450 ********** 2025-03-23 13:20:48.607198 | orchestrator | skipping: 
[testbed-node-5] 2025-03-23 13:20:48.607477 | orchestrator | 2025-03-23 13:20:48.607947 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:48.608929 | orchestrator | Sunday 23 March 2025 13:20:48 +0000 (0:00:00.273) 0:00:56.723 ********** 2025-03-23 13:20:48.821366 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:20:48.822604 | orchestrator | 2025-03-23 13:20:48.823632 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:48.825374 | orchestrator | Sunday 23 March 2025 13:20:48 +0000 (0:00:00.216) 0:00:56.940 ********** 2025-03-23 13:20:49.274503 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81) 2025-03-23 13:20:49.275484 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81) 2025-03-23 13:20:49.276172 | orchestrator | 2025-03-23 13:20:49.278857 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:49.768067 | orchestrator | Sunday 23 March 2025 13:20:49 +0000 (0:00:00.453) 0:00:57.393 ********** 2025-03-23 13:20:49.768169 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fc6f372a-e295-454e-89b3-dab7283bde6d) 2025-03-23 13:20:49.768533 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fc6f372a-e295-454e-89b3-dab7283bde6d) 2025-03-23 13:20:49.769230 | orchestrator | 2025-03-23 13:20:49.769694 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:49.771441 | orchestrator | Sunday 23 March 2025 13:20:49 +0000 (0:00:00.492) 0:00:57.886 ********** 2025-03-23 13:20:50.248171 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cb506f9f-b661-4909-8304-0e1e52bc58d9) 2025-03-23 13:20:50.248745 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cb506f9f-b661-4909-8304-0e1e52bc58d9) 2025-03-23 13:20:50.249559 | orchestrator | 2025-03-23 13:20:50.251107 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:50.254970 | orchestrator | Sunday 23 March 2025 13:20:50 +0000 (0:00:00.479) 0:00:58.365 ********** 2025-03-23 13:20:50.690449 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9e74186d-4472-4929-8a20-9843938772e5) 2025-03-23 13:20:50.690928 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9e74186d-4472-4929-8a20-9843938772e5) 2025-03-23 13:20:50.690951 | orchestrator | 2025-03-23 13:20:50.690968 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-23 13:20:50.691443 | orchestrator | Sunday 23 March 2025 13:20:50 +0000 (0:00:00.441) 0:00:58.807 ********** 2025-03-23 13:20:51.052159 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-23 13:20:51.739110 | orchestrator | 2025-03-23 13:20:51.739214 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:51.739233 | orchestrator | Sunday 23 March 2025 13:20:51 +0000 (0:00:00.362) 0:00:59.170 ********** 2025-03-23 13:20:51.739260 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-03-23 13:20:51.739584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 
2025-03-23 13:20:51.741078 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-03-23 13:20:51.743998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-03-23 13:20:51.744753 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-03-23 13:20:51.744775 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-03-23 13:20:51.744794 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-03-23 13:20:51.745436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-03-23 13:20:51.745802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-03-23 13:20:51.746503 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-03-23 13:20:51.746891 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-03-23 13:20:51.747391 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-03-23 13:20:51.747807 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-03-23 13:20:51.748498 | orchestrator | 2025-03-23 13:20:51.748652 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:51.749078 | orchestrator | Sunday 23 March 2025 13:20:51 +0000 (0:00:00.687) 0:00:59.857 ********** 2025-03-23 13:20:51.957114 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:20:51.957393 | orchestrator | 2025-03-23 13:20:51.957762 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:51.958681 | orchestrator | Sunday 23 March 2025 13:20:51 +0000 (0:00:00.219) 0:01:00.076 ********** 2025-03-23 13:20:52.178572 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:20:52.179799 | orchestrator | 2025-03-23 13:20:52.179827 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:52.180765 | orchestrator | Sunday 23 March 2025 13:20:52 +0000 (0:00:00.221) 0:01:00.298 ********** 2025-03-23 13:20:52.404343 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:20:52.409075 | orchestrator | 2025-03-23 13:20:52.409230 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:52.629701 | orchestrator | Sunday 23 March 2025 13:20:52 +0000 (0:00:00.224) 0:01:00.522 ********** 2025-03-23 13:20:52.629801 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:20:52.630778 | orchestrator | 2025-03-23 13:20:52.630808 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:52.631107 | orchestrator | Sunday 23 March 2025 13:20:52 +0000 (0:00:00.225) 0:01:00.748 ********** 2025-03-23 13:20:52.882124 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:20:52.883268 | orchestrator | 2025-03-23 13:20:52.884540 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:52.887104 | orchestrator | Sunday 23 March 2025 13:20:52 +0000 (0:00:00.252) 0:01:01.001 ********** 2025-03-23 13:20:53.102073 | orchestrator | 
skipping: [testbed-node-5] 2025-03-23 13:20:53.102664 | orchestrator | 2025-03-23 13:20:53.105448 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:53.106258 | orchestrator | Sunday 23 March 2025 13:20:53 +0000 (0:00:00.219) 0:01:01.220 ********** 2025-03-23 13:20:53.323782 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:20:53.324872 | orchestrator | 2025-03-23 13:20:53.325318 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:53.327971 | orchestrator | Sunday 23 March 2025 13:20:53 +0000 (0:00:00.222) 0:01:01.442 ********** 2025-03-23 13:20:53.560772 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:20:53.561368 | orchestrator | 2025-03-23 13:20:53.562901 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:53.564051 | orchestrator | Sunday 23 March 2025 13:20:53 +0000 (0:00:00.236) 0:01:01.678 ********** 2025-03-23 13:20:54.476570 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-03-23 13:20:54.477578 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-03-23 13:20:54.479499 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-03-23 13:20:54.479740 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-03-23 13:20:54.480886 | orchestrator | 2025-03-23 13:20:54.481697 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:54.482134 | orchestrator | Sunday 23 March 2025 13:20:54 +0000 (0:00:00.914) 0:01:02.593 ********** 2025-03-23 13:20:54.691832 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:20:54.692905 | orchestrator | 2025-03-23 13:20:54.693314 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:54.693672 | orchestrator | Sunday 23 March 2025 13:20:54 +0000 (0:00:00.207) 0:01:02.800 ********** 2025-03-23 13:20:55.301390 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:20:55.302332 | orchestrator | 2025-03-23 13:20:55.302926 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:55.305010 | orchestrator | Sunday 23 March 2025 13:20:55 +0000 (0:00:00.618) 0:01:03.419 ********** 2025-03-23 13:20:55.520852 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:20:55.520970 | orchestrator | 2025-03-23 13:20:55.520993 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-23 13:20:55.521117 | orchestrator | Sunday 23 March 2025 13:20:55 +0000 (0:00:00.220) 0:01:03.640 ********** 2025-03-23 13:20:55.749669 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:20:55.752246 | orchestrator | 2025-03-23 13:20:55.884224 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-03-23 13:20:55.884271 | orchestrator | Sunday 23 March 2025 13:20:55 +0000 (0:00:00.225) 0:01:03.866 ********** 2025-03-23 13:20:55.884317 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:20:55.884822 | orchestrator | 2025-03-23 13:20:55.885937 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-03-23 13:20:55.886256 | orchestrator | Sunday 23 March 2025 13:20:55 +0000 (0:00:00.137) 0:01:04.003 ********** 2025-03-23 13:20:56.107368 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'9205bfbb-9f4f-501b-85a3-60f418fff160'}}) 2025-03-23 13:20:56.107771 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5a8506d3-5e74-5dde-8df3-17f522800900'}}) 2025-03-23 13:20:56.108888 | orchestrator | 2025-03-23 13:20:56.109747 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-03-23 13:20:56.111699 | orchestrator | Sunday 23 March 2025 13:20:56 +0000 (0:00:00.223) 0:01:04.226 ********** 2025-03-23 13:20:58.056103 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'}) 2025-03-23 13:20:58.056272 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'}) 2025-03-23 13:20:58.058329 | orchestrator | 2025-03-23 13:20:58.060120 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-03-23 13:20:58.060802 | orchestrator | Sunday 23 March 2025 13:20:58 +0000 (0:00:01.946) 0:01:06.172 ********** 2025-03-23 13:20:58.232387 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'})  2025-03-23 13:20:58.232994 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'})  2025-03-23 13:20:58.233855 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:20:58.235129 | orchestrator | 2025-03-23 13:20:58.235841 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-03-23 13:20:58.236693 | orchestrator | Sunday 23 March 2025 13:20:58 +0000 (0:00:00.178) 0:01:06.351 ********** 2025-03-23 13:20:59.608056 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'}) 2025-03-23 13:20:59.608186 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'}) 2025-03-23 13:20:59.609171 | orchestrator | 2025-03-23 13:20:59.610009 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-03-23 13:20:59.611251 | orchestrator | Sunday 23 March 2025 13:20:59 +0000 (0:00:01.372) 0:01:07.723 ********** 2025-03-23 13:20:59.978626 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'})  2025-03-23 13:20:59.979893 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'})  2025-03-23 13:20:59.981147 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:20:59.982855 | orchestrator | 2025-03-23 13:20:59.984996 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-03-23 13:20:59.985244 | orchestrator | Sunday 23 March 2025 13:20:59 +0000 (0:00:00.374) 0:01:08.098 ********** 2025-03-23 13:21:00.137429 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:00.138438 | orchestrator | 2025-03-23 13:21:00.141806 | orchestrator | TASK [Print 'Create DB VGs'] 
*************************************************** 2025-03-23 13:21:00.142478 | orchestrator | Sunday 23 March 2025 13:21:00 +0000 (0:00:00.157) 0:01:08.255 ********** 2025-03-23 13:21:00.329352 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'})  2025-03-23 13:21:00.329707 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'})  2025-03-23 13:21:00.329749 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:00.331561 | orchestrator | 2025-03-23 13:21:00.332806 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-03-23 13:21:00.334799 | orchestrator | Sunday 23 March 2025 13:21:00 +0000 (0:00:00.192) 0:01:08.448 ********** 2025-03-23 13:21:00.480439 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:00.481256 | orchestrator | 2025-03-23 13:21:00.482322 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-03-23 13:21:00.483150 | orchestrator | Sunday 23 March 2025 13:21:00 +0000 (0:00:00.151) 0:01:08.599 ********** 2025-03-23 13:21:00.657178 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'})  2025-03-23 13:21:00.663615 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'})  2025-03-23 13:21:00.664560 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:00.664594 | orchestrator | 2025-03-23 13:21:00.664614 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-03-23 13:21:00.665048 | orchestrator | Sunday 23 March 2025 13:21:00 +0000 (0:00:00.176) 0:01:08.776 ********** 2025-03-23 13:21:00.813454 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:00.814179 | orchestrator | 2025-03-23 13:21:00.814213 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-03-23 13:21:00.814235 | orchestrator | Sunday 23 March 2025 13:21:00 +0000 (0:00:00.154) 0:01:08.930 ********** 2025-03-23 13:21:00.979440 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'})  2025-03-23 13:21:00.980668 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'})  2025-03-23 13:21:00.984115 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:00.984202 | orchestrator | 2025-03-23 13:21:00.984223 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-03-23 13:21:00.985081 | orchestrator | Sunday 23 March 2025 13:21:00 +0000 (0:00:00.167) 0:01:09.098 ********** 2025-03-23 13:21:01.180089 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:21:01.180723 | orchestrator | 2025-03-23 13:21:01.181035 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-03-23 13:21:01.181529 | orchestrator | Sunday 23 March 2025 13:21:01 +0000 (0:00:00.198) 0:01:09.297 ********** 2025-03-23 13:21:01.360706 | orchestrator | skipping: 
[testbed-node-5] => (item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'})  2025-03-23 13:21:01.531720 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'})  2025-03-23 13:21:01.531789 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:01.531806 | orchestrator | 2025-03-23 13:21:01.531821 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-03-23 13:21:01.531840 | orchestrator | Sunday 23 March 2025 13:21:01 +0000 (0:00:00.179) 0:01:09.477 ********** 2025-03-23 13:21:01.531866 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'})  2025-03-23 13:21:01.532814 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'})  2025-03-23 13:21:01.532852 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:01.535185 | orchestrator | 2025-03-23 13:21:01.535712 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-03-23 13:21:01.535745 | orchestrator | Sunday 23 March 2025 13:21:01 +0000 (0:00:00.171) 0:01:09.649 ********** 2025-03-23 13:21:01.703254 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'})  2025-03-23 13:21:01.703825 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'})  2025-03-23 13:21:01.704438 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:01.704866 | orchestrator | 2025-03-23 13:21:01.705636 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-03-23 13:21:01.705870 | orchestrator | Sunday 23 March 2025 13:21:01 +0000 (0:00:00.172) 0:01:09.822 ********** 2025-03-23 13:21:01.834008 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:01.834560 | orchestrator | 2025-03-23 13:21:02.207800 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-03-23 13:21:02.207897 | orchestrator | Sunday 23 March 2025 13:21:01 +0000 (0:00:00.130) 0:01:09.952 ********** 2025-03-23 13:21:02.207935 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:02.208000 | orchestrator | 2025-03-23 13:21:02.208021 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-03-23 13:21:02.208320 | orchestrator | Sunday 23 March 2025 13:21:02 +0000 (0:00:00.374) 0:01:10.327 ********** 2025-03-23 13:21:02.355834 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:02.356643 | orchestrator | 2025-03-23 13:21:02.357391 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-03-23 13:21:02.358126 | orchestrator | Sunday 23 March 2025 13:21:02 +0000 (0:00:00.148) 0:01:10.475 ********** 2025-03-23 13:21:02.517620 | orchestrator | ok: [testbed-node-5] => { 2025-03-23 13:21:02.518265 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-03-23 13:21:02.519205 | orchestrator | } 2025-03-23 13:21:02.520217 | orchestrator | 2025-03-23 13:21:02.520834 | orchestrator | 
TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-03-23 13:21:02.521797 | orchestrator | Sunday 23 March 2025 13:21:02 +0000 (0:00:00.162) 0:01:10.637 ********** 2025-03-23 13:21:02.669130 | orchestrator | ok: [testbed-node-5] => { 2025-03-23 13:21:02.669856 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-03-23 13:21:02.670766 | orchestrator | } 2025-03-23 13:21:02.671018 | orchestrator | 2025-03-23 13:21:02.671950 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-03-23 13:21:02.674405 | orchestrator | Sunday 23 March 2025 13:21:02 +0000 (0:00:00.151) 0:01:10.789 ********** 2025-03-23 13:21:02.827121 | orchestrator | ok: [testbed-node-5] => { 2025-03-23 13:21:02.828500 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-03-23 13:21:02.830095 | orchestrator | } 2025-03-23 13:21:02.833230 | orchestrator | 2025-03-23 13:21:03.414418 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-03-23 13:21:03.414473 | orchestrator | Sunday 23 March 2025 13:21:02 +0000 (0:00:00.157) 0:01:10.946 ********** 2025-03-23 13:21:03.414499 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:21:03.415805 | orchestrator | 2025-03-23 13:21:03.415836 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-03-23 13:21:03.416603 | orchestrator | Sunday 23 March 2025 13:21:03 +0000 (0:00:00.586) 0:01:11.533 ********** 2025-03-23 13:21:03.994320 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:21:03.994637 | orchestrator | 2025-03-23 13:21:03.995212 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-03-23 13:21:03.996197 | orchestrator | Sunday 23 March 2025 13:21:03 +0000 (0:00:00.579) 0:01:12.113 ********** 2025-03-23 13:21:04.539253 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:21:04.540419 | orchestrator | 2025-03-23 13:21:04.541055 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-03-23 13:21:04.541440 | orchestrator | Sunday 23 March 2025 13:21:04 +0000 (0:00:00.543) 0:01:12.656 ********** 2025-03-23 13:21:04.722867 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:21:04.723001 | orchestrator | 2025-03-23 13:21:04.723397 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-03-23 13:21:04.723840 | orchestrator | Sunday 23 March 2025 13:21:04 +0000 (0:00:00.185) 0:01:12.842 ********** 2025-03-23 13:21:04.840204 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:04.840988 | orchestrator | 2025-03-23 13:21:04.842098 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-03-23 13:21:04.842948 | orchestrator | Sunday 23 March 2025 13:21:04 +0000 (0:00:00.117) 0:01:12.960 ********** 2025-03-23 13:21:04.959034 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:04.959345 | orchestrator | 2025-03-23 13:21:04.959377 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-03-23 13:21:04.959661 | orchestrator | Sunday 23 March 2025 13:21:04 +0000 (0:00:00.117) 0:01:13.077 ********** 2025-03-23 13:21:05.326886 | orchestrator | ok: [testbed-node-5] => { 2025-03-23 13:21:05.327437 | orchestrator |  "vgs_report": { 2025-03-23 13:21:05.327922 | orchestrator |  "vg": [] 2025-03-23 13:21:05.328929 | orchestrator |  } 2025-03-23 13:21:05.329790 | orchestrator 
| } 2025-03-23 13:21:05.330814 | orchestrator | 2025-03-23 13:21:05.331120 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-03-23 13:21:05.331811 | orchestrator | Sunday 23 March 2025 13:21:05 +0000 (0:00:00.366) 0:01:13.444 ********** 2025-03-23 13:21:05.475024 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:05.475174 | orchestrator | 2025-03-23 13:21:05.475594 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-03-23 13:21:05.476070 | orchestrator | Sunday 23 March 2025 13:21:05 +0000 (0:00:00.150) 0:01:13.594 ********** 2025-03-23 13:21:05.637946 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:05.638142 | orchestrator | 2025-03-23 13:21:05.638888 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-03-23 13:21:05.639993 | orchestrator | Sunday 23 March 2025 13:21:05 +0000 (0:00:00.161) 0:01:13.756 ********** 2025-03-23 13:21:05.772688 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:05.773412 | orchestrator | 2025-03-23 13:21:05.775150 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-03-23 13:21:05.775793 | orchestrator | Sunday 23 March 2025 13:21:05 +0000 (0:00:00.135) 0:01:13.892 ********** 2025-03-23 13:21:05.975504 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:05.976225 | orchestrator | 2025-03-23 13:21:05.976261 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-03-23 13:21:05.977611 | orchestrator | Sunday 23 March 2025 13:21:05 +0000 (0:00:00.202) 0:01:14.095 ********** 2025-03-23 13:21:06.144356 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:06.144899 | orchestrator | 2025-03-23 13:21:06.145598 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-03-23 13:21:06.146900 | orchestrator | Sunday 23 March 2025 13:21:06 +0000 (0:00:00.167) 0:01:14.262 ********** 2025-03-23 13:21:06.277233 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:06.277417 | orchestrator | 2025-03-23 13:21:06.279942 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-03-23 13:21:06.280621 | orchestrator | Sunday 23 March 2025 13:21:06 +0000 (0:00:00.132) 0:01:14.394 ********** 2025-03-23 13:21:06.434372 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:06.434893 | orchestrator | 2025-03-23 13:21:06.435434 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-03-23 13:21:06.436137 | orchestrator | Sunday 23 March 2025 13:21:06 +0000 (0:00:00.159) 0:01:14.554 ********** 2025-03-23 13:21:06.586123 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:06.586532 | orchestrator | 2025-03-23 13:21:06.587550 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-03-23 13:21:06.588422 | orchestrator | Sunday 23 March 2025 13:21:06 +0000 (0:00:00.149) 0:01:14.703 ********** 2025-03-23 13:21:06.741052 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:06.742427 | orchestrator | 2025-03-23 13:21:06.743137 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-03-23 13:21:06.743171 | orchestrator | Sunday 23 March 2025 13:21:06 +0000 (0:00:00.156) 0:01:14.860 ********** 2025-03-23 13:21:06.897173 | orchestrator | 
skipping: [testbed-node-5] 2025-03-23 13:21:06.898390 | orchestrator | 2025-03-23 13:21:06.899255 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-03-23 13:21:06.900199 | orchestrator | Sunday 23 March 2025 13:21:06 +0000 (0:00:00.154) 0:01:15.015 ********** 2025-03-23 13:21:07.044828 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:07.045525 | orchestrator | 2025-03-23 13:21:07.046399 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-03-23 13:21:07.047372 | orchestrator | Sunday 23 March 2025 13:21:07 +0000 (0:00:00.148) 0:01:15.164 ********** 2025-03-23 13:21:07.443536 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:07.443703 | orchestrator | 2025-03-23 13:21:07.444133 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-03-23 13:21:07.444698 | orchestrator | Sunday 23 March 2025 13:21:07 +0000 (0:00:00.397) 0:01:15.561 ********** 2025-03-23 13:21:07.601686 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:07.603495 | orchestrator | 2025-03-23 13:21:07.604965 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-03-23 13:21:07.606271 | orchestrator | Sunday 23 March 2025 13:21:07 +0000 (0:00:00.158) 0:01:15.720 ********** 2025-03-23 13:21:07.763164 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:07.763932 | orchestrator | 2025-03-23 13:21:07.765268 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-03-23 13:21:07.765538 | orchestrator | Sunday 23 March 2025 13:21:07 +0000 (0:00:00.160) 0:01:15.881 ********** 2025-03-23 13:21:07.965458 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'})  2025-03-23 13:21:07.968399 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'})  2025-03-23 13:21:07.974787 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:08.151761 | orchestrator | 2025-03-23 13:21:08.151837 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-03-23 13:21:08.151854 | orchestrator | Sunday 23 March 2025 13:21:07 +0000 (0:00:00.201) 0:01:16.082 ********** 2025-03-23 13:21:08.151897 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'})  2025-03-23 13:21:08.152824 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'})  2025-03-23 13:21:08.153702 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:08.154921 | orchestrator | 2025-03-23 13:21:08.157156 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-03-23 13:21:08.157241 | orchestrator | Sunday 23 March 2025 13:21:08 +0000 (0:00:00.188) 0:01:16.271 ********** 2025-03-23 13:21:08.335652 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'})  2025-03-23 13:21:08.336578 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'})  2025-03-23 13:21:08.338102 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:08.338903 | orchestrator | 2025-03-23 13:21:08.340185 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-03-23 13:21:08.341444 | orchestrator | Sunday 23 March 2025 13:21:08 +0000 (0:00:00.182) 0:01:16.453 ********** 2025-03-23 13:21:08.514594 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'})  2025-03-23 13:21:08.515733 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'})  2025-03-23 13:21:08.517319 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:08.518740 | orchestrator | 2025-03-23 13:21:08.519961 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-03-23 13:21:08.520769 | orchestrator | Sunday 23 March 2025 13:21:08 +0000 (0:00:00.180) 0:01:16.634 ********** 2025-03-23 13:21:08.695146 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'})  2025-03-23 13:21:08.696930 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'})  2025-03-23 13:21:08.696964 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:08.697996 | orchestrator | 2025-03-23 13:21:08.699204 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-03-23 13:21:08.699598 | orchestrator | Sunday 23 March 2025 13:21:08 +0000 (0:00:00.178) 0:01:16.812 ********** 2025-03-23 13:21:08.893652 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'})  2025-03-23 13:21:08.895133 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'})  2025-03-23 13:21:08.897188 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:08.898249 | orchestrator | 2025-03-23 13:21:08.899004 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-03-23 13:21:08.899653 | orchestrator | Sunday 23 March 2025 13:21:08 +0000 (0:00:00.198) 0:01:17.010 ********** 2025-03-23 13:21:09.064175 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'})  2025-03-23 13:21:09.064577 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'})  2025-03-23 13:21:09.065539 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:09.066098 | orchestrator | 2025-03-23 13:21:09.066722 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-03-23 13:21:09.067060 | orchestrator | Sunday 23 March 2025 13:21:09 +0000 (0:00:00.172) 0:01:17.183 ********** 2025-03-23 13:21:09.248513 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'})  2025-03-23 13:21:09.248650 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'})  2025-03-23 13:21:09.249918 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:09.251010 | orchestrator | 2025-03-23 13:21:09.251316 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-03-23 13:21:09.251345 | orchestrator | Sunday 23 March 2025 13:21:09 +0000 (0:00:00.184) 0:01:17.368 ********** 2025-03-23 13:21:10.028458 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:21:10.029711 | orchestrator | 2025-03-23 13:21:10.030570 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-03-23 13:21:10.030775 | orchestrator | Sunday 23 March 2025 13:21:10 +0000 (0:00:00.776) 0:01:18.145 ********** 2025-03-23 13:21:10.580399 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:21:10.580743 | orchestrator | 2025-03-23 13:21:10.583059 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-03-23 13:21:10.584181 | orchestrator | Sunday 23 March 2025 13:21:10 +0000 (0:00:00.551) 0:01:18.696 ********** 2025-03-23 13:21:10.734574 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:21:10.734916 | orchestrator | 2025-03-23 13:21:10.736638 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-03-23 13:21:10.737303 | orchestrator | Sunday 23 March 2025 13:21:10 +0000 (0:00:00.157) 0:01:18.854 ********** 2025-03-23 13:21:10.959393 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'vg_name': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'}) 2025-03-23 13:21:10.959594 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'vg_name': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'}) 2025-03-23 13:21:10.960063 | orchestrator | 2025-03-23 13:21:10.960708 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-03-23 13:21:10.960857 | orchestrator | Sunday 23 March 2025 13:21:10 +0000 (0:00:00.223) 0:01:19.077 ********** 2025-03-23 13:21:11.150733 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'})  2025-03-23 13:21:11.151803 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'})  2025-03-23 13:21:11.152409 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:11.155459 | orchestrator | 2025-03-23 13:21:11.156266 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-03-23 13:21:11.156322 | orchestrator | Sunday 23 March 2025 13:21:11 +0000 (0:00:00.192) 0:01:19.270 ********** 2025-03-23 13:21:11.355517 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'})  2025-03-23 13:21:11.356371 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'})  
2025-03-23 13:21:11.357270 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:11.359573 | orchestrator | 2025-03-23 13:21:11.359650 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-03-23 13:21:11.360322 | orchestrator | Sunday 23 March 2025 13:21:11 +0000 (0:00:00.204) 0:01:19.474 ********** 2025-03-23 13:21:11.533748 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'})  2025-03-23 13:21:11.534652 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'})  2025-03-23 13:21:11.535502 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:11.536638 | orchestrator | 2025-03-23 13:21:11.537411 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-03-23 13:21:11.538077 | orchestrator | Sunday 23 March 2025 13:21:11 +0000 (0:00:00.178) 0:01:19.652 ********** 2025-03-23 13:21:12.173067 | orchestrator | ok: [testbed-node-5] => { 2025-03-23 13:21:12.173805 | orchestrator |  "lvm_report": { 2025-03-23 13:21:12.175582 | orchestrator |  "lv": [ 2025-03-23 13:21:12.176484 | orchestrator |  { 2025-03-23 13:21:12.177664 | orchestrator |  "lv_name": "osd-block-5a8506d3-5e74-5dde-8df3-17f522800900", 2025-03-23 13:21:12.179033 | orchestrator |  "vg_name": "ceph-5a8506d3-5e74-5dde-8df3-17f522800900" 2025-03-23 13:21:12.180109 | orchestrator |  }, 2025-03-23 13:21:12.180723 | orchestrator |  { 2025-03-23 13:21:12.181906 | orchestrator |  "lv_name": "osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160", 2025-03-23 13:21:12.184506 | orchestrator |  "vg_name": "ceph-9205bfbb-9f4f-501b-85a3-60f418fff160" 2025-03-23 13:21:12.185421 | orchestrator |  } 2025-03-23 13:21:12.186152 | orchestrator |  ], 2025-03-23 13:21:12.187263 | orchestrator |  "pv": [ 2025-03-23 13:21:12.187677 | orchestrator |  { 2025-03-23 13:21:12.188375 | orchestrator |  "pv_name": "/dev/sdb", 2025-03-23 13:21:12.189220 | orchestrator |  "vg_name": "ceph-9205bfbb-9f4f-501b-85a3-60f418fff160" 2025-03-23 13:21:12.189797 | orchestrator |  }, 2025-03-23 13:21:12.190532 | orchestrator |  { 2025-03-23 13:21:12.191808 | orchestrator |  "pv_name": "/dev/sdc", 2025-03-23 13:21:12.192731 | orchestrator |  "vg_name": "ceph-5a8506d3-5e74-5dde-8df3-17f522800900" 2025-03-23 13:21:12.193307 | orchestrator |  } 2025-03-23 13:21:12.193868 | orchestrator |  ] 2025-03-23 13:21:12.194436 | orchestrator |  } 2025-03-23 13:21:12.194956 | orchestrator | } 2025-03-23 13:21:12.195345 | orchestrator | 2025-03-23 13:21:12.196578 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:21:12.196620 | orchestrator | 2025-03-23 13:21:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-23 13:21:12.196780 | orchestrator | 2025-03-23 13:21:12 | INFO  | Please wait and do not abort execution. 
2025-03-23 13:21:12.196808 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-03-23 13:21:12.198397 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-03-23 13:21:12.198859 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-03-23 13:21:12.199898 | orchestrator | 2025-03-23 13:21:12.200694 | orchestrator | 2025-03-23 13:21:12.201412 | orchestrator | 2025-03-23 13:21:12.201819 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:21:12.202199 | orchestrator | Sunday 23 March 2025 13:21:12 +0000 (0:00:00.639) 0:01:20.291 ********** 2025-03-23 13:21:12.202587 | orchestrator | =============================================================================== 2025-03-23 13:21:12.203147 | orchestrator | Create block VGs -------------------------------------------------------- 5.99s 2025-03-23 13:21:12.203441 | orchestrator | Create block LVs -------------------------------------------------------- 4.14s 2025-03-23 13:21:12.204381 | orchestrator | Print LVM report data --------------------------------------------------- 2.30s 2025-03-23 13:21:12.205037 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.17s 2025-03-23 13:21:12.206531 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.92s 2025-03-23 13:21:12.207964 | orchestrator | Add known links to the list of available block devices ------------------ 1.75s 2025-03-23 13:21:12.208518 | orchestrator | Add known partitions to the list of available block devices ------------- 1.69s 2025-03-23 13:21:12.209677 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.69s 2025-03-23 13:21:12.210260 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.68s 2025-03-23 13:21:12.211258 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.63s 2025-03-23 13:21:12.211545 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.19s 2025-03-23 13:21:12.212169 | orchestrator | Add known partitions to the list of available block devices ------------- 0.91s 2025-03-23 13:21:12.212765 | orchestrator | Get initial list of available block devices ----------------------------- 0.80s 2025-03-23 13:21:12.213462 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.76s 2025-03-23 13:21:12.214355 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.76s 2025-03-23 13:21:12.214735 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s 2025-03-23 13:21:12.215572 | orchestrator | Fail if size of DB+WAL LVs on ceph_db_wal_devices > available ----------- 0.73s 2025-03-23 13:21:12.215665 | orchestrator | Fail if DB LV size < 30 GiB for ceph_db_wal_devices --------------------- 0.72s 2025-03-23 13:21:12.216527 | orchestrator | Combine JSON from _lvs_cmd_output/_pvs_cmd_output ----------------------- 0.71s 2025-03-23 13:21:12.217722 | orchestrator | Print 'Create block LVs' ------------------------------------------------ 0.71s 2025-03-23 13:21:14.267354 | orchestrator | 2025-03-23 13:21:14 | INFO  | Task 435ecbe5-5d47-433b-9daf-121f6a7f8949 (facts) was prepared for execution. 
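The Ceph LVM play above builds its lvm_report by listing LVs and PVs together with their volume groups and then combining the two JSON results (the _lvs_cmd_output/_pvs_cmd_output step). A minimal sketch of that pattern, assuming an LVM2 version with JSON report support; the exact commands, fields, and lvm_volumes entries used by the playbook are not visible in this log, so everything below is illustrative:

    # Collect LV/VG and PV/VG pairs the way the "Get list of Ceph LVs/PVs with
    # associated VGs" tasks appear to, then merge them into one report.
    import json
    import subprocess

    def lvm_report(cmd):
        # e.g. ["lvs", "--reportformat", "json", "-o", "lv_name,vg_name"]
        out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
        return json.loads(out)["report"][0]

    lvs = lvm_report(["lvs", "--reportformat", "json", "-o", "lv_name,vg_name"])["lv"]
    pvs = lvm_report(["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"])["pv"]
    report = {"lv": lvs, "pv": pvs}

    # The "Fail if block LV defined in lvm_volumes is missing" checks then reduce
    # to membership tests against the combined report.
    existing = {(lv["vg_name"], lv["lv_name"]) for lv in report["lv"]}
    lvm_volumes = [  # hypothetical entry mirroring the data/data_vg pairs in the log
        {"data": "osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160",
         "data_vg": "ceph-9205bfbb-9f4f-501b-85a3-60f418fff160"},
    ]
    missing = [v for v in lvm_volumes if (v["data_vg"], v["data"]) not in existing]
    if missing:
        raise SystemExit(f"block LVs missing: {missing}")

    print(json.dumps(report, indent=2))

With the two OSDs per node shown above, the printed structure mirrors the lvm_report output in the log: two LVs plus the two backing PVs (/dev/sdb, /dev/sdc), each keyed by its ceph-* volume group.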
2025-03-23 13:21:17.662344 | orchestrator | 2025-03-23 13:21:14 | INFO  | It takes a moment until task 435ecbe5-5d47-433b-9daf-121f6a7f8949 (facts) has been started and output is visible here. 2025-03-23 13:21:17.662484 | orchestrator | 2025-03-23 13:21:17.663079 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-03-23 13:21:17.663983 | orchestrator | 2025-03-23 13:21:17.667127 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-03-23 13:21:18.741209 | orchestrator | Sunday 23 March 2025 13:21:17 +0000 (0:00:00.213) 0:00:00.213 ********** 2025-03-23 13:21:18.741388 | orchestrator | ok: [testbed-manager] 2025-03-23 13:21:18.742866 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:21:18.744902 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:21:18.748439 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:21:18.748669 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:21:18.752254 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:21:18.753381 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:21:18.753978 | orchestrator | 2025-03-23 13:21:18.754436 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-03-23 13:21:18.754850 | orchestrator | Sunday 23 March 2025 13:21:18 +0000 (0:00:01.076) 0:00:01.289 ********** 2025-03-23 13:21:18.913540 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:21:19.000547 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:21:19.084732 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:21:19.180497 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:21:19.294106 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:21:20.045643 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:21:20.046181 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:20.049429 | orchestrator | 2025-03-23 13:21:20.050540 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-03-23 13:21:20.051895 | orchestrator | 2025-03-23 13:21:20.052849 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-03-23 13:21:20.055752 | orchestrator | Sunday 23 March 2025 13:21:20 +0000 (0:00:01.308) 0:00:02.598 ********** 2025-03-23 13:21:24.736844 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:21:24.737487 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:21:24.737527 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:21:24.738950 | orchestrator | ok: [testbed-manager] 2025-03-23 13:21:24.739168 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:21:24.739847 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:21:24.740423 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:21:24.740872 | orchestrator | 2025-03-23 13:21:24.742008 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-03-23 13:21:24.743200 | orchestrator | 2025-03-23 13:21:24.744627 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-03-23 13:21:24.745195 | orchestrator | Sunday 23 March 2025 13:21:24 +0000 (0:00:04.692) 0:00:07.290 ********** 2025-03-23 13:21:25.108471 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:21:25.201575 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:21:25.285089 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:21:25.368267 | orchestrator | skipping: [testbed-node-2] 2025-03-23 
13:21:25.446513 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:21:25.497132 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:21:25.497489 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:21:25.499216 | orchestrator | 2025-03-23 13:21:25.500577 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:21:25.500974 | orchestrator | 2025-03-23 13:21:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-23 13:21:25.501469 | orchestrator | 2025-03-23 13:21:25 | INFO  | Please wait and do not abort execution. 2025-03-23 13:21:25.502993 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:21:25.503788 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:21:25.507099 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:21:25.511570 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:21:25.511642 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:21:25.512964 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:21:25.513499 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:21:25.514272 | orchestrator | 2025-03-23 13:21:25.514980 | orchestrator | Sunday 23 March 2025 13:21:25 +0000 (0:00:00.761) 0:00:08.052 ********** 2025-03-23 13:21:25.515938 | orchestrator | =============================================================================== 2025-03-23 13:21:25.516775 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.69s 2025-03-23 13:21:25.517143 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.31s 2025-03-23 13:21:25.517630 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.08s 2025-03-23 13:21:25.518470 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.76s 2025-03-23 13:21:26.123898 | orchestrator | 2025-03-23 13:21:26.127982 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun Mar 23 13:21:26 UTC 2025 2025-03-23 13:21:27.568585 | orchestrator | 2025-03-23 13:21:27.568709 | orchestrator | 2025-03-23 13:21:27 | INFO  | Collection nutshell is prepared for execution 2025-03-23 13:21:27.573281 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [0] - dotfiles 2025-03-23 13:21:27.573378 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [0] - homer 2025-03-23 13:21:27.575383 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [0] - netdata 2025-03-23 13:21:27.575411 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [0] - openstackclient 2025-03-23 13:21:27.575426 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [0] - phpmyadmin 2025-03-23 13:21:27.575449 | orchestrator | 2025-03-23 13:21:27 | INFO  | A [0] - common 2025-03-23 13:21:27.575473 | orchestrator | 2025-03-23 13:21:27 | INFO  | A [1] -- loadbalancer 2025-03-23 13:21:27.575578 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [2] --- opensearch 2025-03-23 13:21:27.575913 | orchestrator | 2025-03-23 13:21:27 | INFO  | A [2] --- mariadb-ng 2025-03-23 13:21:27.575936 | orchestrator | 2025-03-23 
13:21:27 | INFO  | D [3] ---- horizon 2025-03-23 13:21:27.575955 | orchestrator | 2025-03-23 13:21:27 | INFO  | A [3] ---- keystone 2025-03-23 13:21:27.576804 | orchestrator | 2025-03-23 13:21:27 | INFO  | A [4] ----- neutron 2025-03-23 13:21:27.576835 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [5] ------ wait-for-nova 2025-03-23 13:21:27.576851 | orchestrator | 2025-03-23 13:21:27 | INFO  | A [5] ------ octavia 2025-03-23 13:21:27.576872 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [4] ----- barbican 2025-03-23 13:21:27.577059 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [4] ----- designate 2025-03-23 13:21:27.577081 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [4] ----- ironic 2025-03-23 13:21:27.577097 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [4] ----- placement 2025-03-23 13:21:27.577117 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [4] ----- magnum 2025-03-23 13:21:27.577279 | orchestrator | 2025-03-23 13:21:27 | INFO  | A [1] -- openvswitch 2025-03-23 13:21:27.577327 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [2] --- ovn 2025-03-23 13:21:27.577349 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [1] -- memcached 2025-03-23 13:21:27.577416 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [1] -- redis 2025-03-23 13:21:27.577435 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [1] -- rabbitmq-ng 2025-03-23 13:21:27.577461 | orchestrator | 2025-03-23 13:21:27 | INFO  | A [0] - kubernetes 2025-03-23 13:21:27.577523 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [1] -- kubeconfig 2025-03-23 13:21:27.577640 | orchestrator | 2025-03-23 13:21:27 | INFO  | A [1] -- copy-kubeconfig 2025-03-23 13:21:27.577665 | orchestrator | 2025-03-23 13:21:27 | INFO  | A [0] - ceph 2025-03-23 13:21:27.578956 | orchestrator | 2025-03-23 13:21:27 | INFO  | A [1] -- ceph-pools 2025-03-23 13:21:27.579036 | orchestrator | 2025-03-23 13:21:27 | INFO  | A [2] --- copy-ceph-keys 2025-03-23 13:21:27.579141 | orchestrator | 2025-03-23 13:21:27 | INFO  | A [3] ---- cephclient 2025-03-23 13:21:27.579161 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-03-23 13:21:27.579181 | orchestrator | 2025-03-23 13:21:27 | INFO  | A [4] ----- wait-for-keystone 2025-03-23 13:21:27.579367 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [5] ------ kolla-ceph-rgw 2025-03-23 13:21:27.579391 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [5] ------ glance 2025-03-23 13:21:27.579406 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [5] ------ cinder 2025-03-23 13:21:27.579426 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [5] ------ nova 2025-03-23 13:21:27.579524 | orchestrator | 2025-03-23 13:21:27 | INFO  | A [4] ----- prometheus 2025-03-23 13:21:27.724976 | orchestrator | 2025-03-23 13:21:27 | INFO  | D [5] ------ grafana 2025-03-23 13:21:27.725077 | orchestrator | 2025-03-23 13:21:27 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-03-23 13:21:27.725222 | orchestrator | 2025-03-23 13:21:27 | INFO  | Tasks are running in the background 2025-03-23 13:21:29.590508 | orchestrator | 2025-03-23 13:21:29 | INFO  | No task IDs specified, wait for all currently running tasks 2025-03-23 13:21:31.699733 | orchestrator | 2025-03-23 13:21:31 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:21:31.699923 | orchestrator | 2025-03-23 13:21:31 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:21:31.703590 | orchestrator | 2025-03-23 13:21:31 | INFO  | Task 
c4dfb14e-3f40-4e93-80c4-c5805f3bacec is in state STARTED 2025-03-23 13:21:31.704162 | orchestrator | 2025-03-23 13:21:31 | INFO  | Task 747f8b24-f0d4-4685-9349-c0f9449981da is in state STARTED 2025-03-23 13:21:31.705053 | orchestrator | 2025-03-23 13:21:31 | INFO  | Task 5fc88d2e-59df-4454-a7ce-8b16a0a79928 is in state STARTED 2025-03-23 13:21:31.705687 | orchestrator | 2025-03-23 13:21:31 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:21:34.734714 | orchestrator | 2025-03-23 13:21:31 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:21:34.734843 | orchestrator | 2025-03-23 13:21:34 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:21:34.735388 | orchestrator | 2025-03-23 13:21:34 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:21:34.736642 | orchestrator | 2025-03-23 13:21:34 | INFO  | Task c4dfb14e-3f40-4e93-80c4-c5805f3bacec is in state STARTED 2025-03-23 13:21:34.737025 | orchestrator | 2025-03-23 13:21:34 | INFO  | Task 747f8b24-f0d4-4685-9349-c0f9449981da is in state STARTED 2025-03-23 13:21:34.738735 | orchestrator | 2025-03-23 13:21:34 | INFO  | Task 5fc88d2e-59df-4454-a7ce-8b16a0a79928 is in state STARTED 2025-03-23 13:21:34.739798 | orchestrator | 2025-03-23 13:21:34 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:21:37.829413 | orchestrator | 2025-03-23 13:21:34 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:21:37.829534 | orchestrator | 2025-03-23 13:21:37 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:21:37.833176 | orchestrator | 2025-03-23 13:21:37 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:21:37.838155 | orchestrator | 2025-03-23 13:21:37 | INFO  | Task c4dfb14e-3f40-4e93-80c4-c5805f3bacec is in state STARTED 2025-03-23 13:21:37.838192 | orchestrator | 2025-03-23 13:21:37 | INFO  | Task 747f8b24-f0d4-4685-9349-c0f9449981da is in state STARTED 2025-03-23 13:21:37.838230 | orchestrator | 2025-03-23 13:21:37 | INFO  | Task 5fc88d2e-59df-4454-a7ce-8b16a0a79928 is in state STARTED 2025-03-23 13:21:37.841728 | orchestrator | 2025-03-23 13:21:37 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:21:40.901055 | orchestrator | 2025-03-23 13:21:37 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:21:40.901213 | orchestrator | 2025-03-23 13:21:40 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:21:40.909187 | orchestrator | 2025-03-23 13:21:40 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:21:40.911266 | orchestrator | 2025-03-23 13:21:40 | INFO  | Task c4dfb14e-3f40-4e93-80c4-c5805f3bacec is in state STARTED 2025-03-23 13:21:40.911296 | orchestrator | 2025-03-23 13:21:40 | INFO  | Task 747f8b24-f0d4-4685-9349-c0f9449981da is in state STARTED 2025-03-23 13:21:40.918727 | orchestrator | 2025-03-23 13:21:40 | INFO  | Task 5fc88d2e-59df-4454-a7ce-8b16a0a79928 is in state STARTED 2025-03-23 13:21:40.922758 | orchestrator | 2025-03-23 13:21:40 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:21:43.986577 | orchestrator | 2025-03-23 13:21:40 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:21:43.986696 | orchestrator | 2025-03-23 13:21:43 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:21:47.062927 | orchestrator | 2025-03-23 
13:21:43 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:21:47.063028 | orchestrator | 2025-03-23 13:21:43 | INFO  | Task c4dfb14e-3f40-4e93-80c4-c5805f3bacec is in state STARTED 2025-03-23 13:21:47.063044 | orchestrator | 2025-03-23 13:21:43 | INFO  | Task 747f8b24-f0d4-4685-9349-c0f9449981da is in state STARTED 2025-03-23 13:21:47.063057 | orchestrator | 2025-03-23 13:21:43 | INFO  | Task 5fc88d2e-59df-4454-a7ce-8b16a0a79928 is in state STARTED 2025-03-23 13:21:47.063070 | orchestrator | 2025-03-23 13:21:43 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:21:47.063084 | orchestrator | 2025-03-23 13:21:43 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:21:47.063112 | orchestrator | 2025-03-23 13:21:47 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:21:47.067793 | orchestrator | 2025-03-23 13:21:47 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:21:47.069977 | orchestrator | 2025-03-23 13:21:47 | INFO  | Task c4dfb14e-3f40-4e93-80c4-c5805f3bacec is in state STARTED 2025-03-23 13:21:47.072689 | orchestrator | 2025-03-23 13:21:47 | INFO  | Task 747f8b24-f0d4-4685-9349-c0f9449981da is in state STARTED 2025-03-23 13:21:47.073153 | orchestrator | 2025-03-23 13:21:47 | INFO  | Task 5fc88d2e-59df-4454-a7ce-8b16a0a79928 is in state STARTED 2025-03-23 13:21:47.077809 | orchestrator | 2025-03-23 13:21:47 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:21:50.149399 | orchestrator | 2025-03-23 13:21:47 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:21:50.149532 | orchestrator | 2025-03-23 13:21:50 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:21:50.149897 | orchestrator | 2025-03-23 13:21:50 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:21:50.149995 | orchestrator | 2025-03-23 13:21:50 | INFO  | Task c4dfb14e-3f40-4e93-80c4-c5805f3bacec is in state SUCCESS 2025-03-23 13:21:50.150995 | orchestrator | 2025-03-23 13:21:50.151062 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-03-23 13:21:50.151097 | orchestrator | 2025-03-23 13:21:50.151114 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-03-23 13:21:50.151128 | orchestrator | Sunday 23 March 2025 13:21:35 +0000 (0:00:00.488) 0:00:00.488 ********** 2025-03-23 13:21:50.151142 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:21:50.151158 | orchestrator | changed: [testbed-manager] 2025-03-23 13:21:50.151192 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:21:50.151207 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:21:50.151221 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:21:50.151235 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:21:50.151249 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:21:50.151263 | orchestrator | 2025-03-23 13:21:50.151277 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-03-23 13:21:50.151297 | orchestrator | Sunday 23 March 2025 13:21:39 +0000 (0:00:04.174) 0:00:04.662 ********** 2025-03-23 13:21:50.151344 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-03-23 13:21:50.151359 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-03-23 13:21:50.151378 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-03-23 13:21:50.151392 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-03-23 13:21:50.151406 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-03-23 13:21:50.151419 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-03-23 13:21:50.151433 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-03-23 13:21:50.151447 | orchestrator | 2025-03-23 13:21:50.151460 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-03-23 13:21:50.151475 | orchestrator | Sunday 23 March 2025 13:21:41 +0000 (0:00:01.903) 0:00:06.566 ********** 2025-03-23 13:21:50.151492 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-23 13:21:40.252855', 'end': '2025-03-23 13:21:40.263132', 'delta': '0:00:00.010277', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-23 13:21:50.151515 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-23 13:21:40.242294', 'end': '2025-03-23 13:21:40.247634', 'delta': '0:00:00.005340', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-23 13:21:50.151531 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-23 13:21:40.254053', 'end': '2025-03-23 13:21:40.263349', 'delta': '0:00:00.009296', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 
'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-23 13:21:50.151573 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-23 13:21:40.526382', 'end': '2025-03-23 13:21:40.538306', 'delta': '0:00:00.011924', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-23 13:21:50.151598 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-23 13:21:40.592164', 'end': '2025-03-23 13:21:40.600504', 'delta': '0:00:00.008340', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-23 13:21:50.151615 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-23 13:21:40.756461', 'end': '2025-03-23 13:21:40.763352', 'delta': '0:00:00.006891', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-23 13:21:50.151635 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-23 13:21:40.978390', 'end': '2025-03-23 13:21:40.986828', 'delta': '0:00:00.008438', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-23 13:21:50.151667 | orchestrator | 2025-03-23 13:21:50.151683 
| orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-03-23 13:21:50.151699 | orchestrator | Sunday 23 March 2025 13:21:44 +0000 (0:00:02.856) 0:00:09.423 ********** 2025-03-23 13:21:50.151714 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-03-23 13:21:50.151730 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-03-23 13:21:50.151745 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-03-23 13:21:50.151760 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-03-23 13:21:50.151775 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-03-23 13:21:50.151791 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-03-23 13:21:50.151813 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-03-23 13:21:50.151829 | orchestrator | 2025-03-23 13:21:50.151844 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:21:50.151860 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:21:50.151877 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:21:50.151893 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:21:50.151917 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:21:50.153605 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:21:50.153679 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:21:50.153697 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:21:50.153712 | orchestrator | 2025-03-23 13:21:50.153727 | orchestrator | Sunday 23 March 2025 13:21:48 +0000 (0:00:04.069) 0:00:13.492 ********** 2025-03-23 13:21:50.153742 | orchestrator | =============================================================================== 2025-03-23 13:21:50.153756 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.17s 2025-03-23 13:21:50.153770 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.07s 2025-03-23 13:21:50.153784 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.86s 2025-03-23 13:21:50.153799 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. 
-------- 1.90s 2025-03-23 13:21:50.153813 | orchestrator | 2025-03-23 13:21:50 | INFO  | Task 747f8b24-f0d4-4685-9349-c0f9449981da is in state STARTED 2025-03-23 13:21:50.153827 | orchestrator | 2025-03-23 13:21:50 | INFO  | Task 5fc88d2e-59df-4454-a7ce-8b16a0a79928 is in state STARTED 2025-03-23 13:21:50.153865 | orchestrator | 2025-03-23 13:21:50 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:21:50.153936 | orchestrator | 2025-03-23 13:21:50 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:21:53.257607 | orchestrator | 2025-03-23 13:21:53 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:21:53.259862 | orchestrator | 2025-03-23 13:21:53 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:21:53.259902 | orchestrator | 2025-03-23 13:21:53 | INFO  | Task 747f8b24-f0d4-4685-9349-c0f9449981da is in state STARTED 2025-03-23 13:21:53.260117 | orchestrator | 2025-03-23 13:21:53 | INFO  | Task 5fc88d2e-59df-4454-a7ce-8b16a0a79928 is in state STARTED 2025-03-23 13:21:53.260142 | orchestrator | 2025-03-23 13:21:53 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:21:53.260162 | orchestrator | 2025-03-23 13:21:53 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state STARTED 2025-03-23 13:21:53.262190 | orchestrator | 2025-03-23 13:21:53 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:21:56.403970 | orchestrator | 2025-03-23 13:21:56 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:21:56.405358 | orchestrator | 2025-03-23 13:21:56 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:21:56.405426 | orchestrator | 2025-03-23 13:21:56 | INFO  | Task 747f8b24-f0d4-4685-9349-c0f9449981da is in state STARTED 2025-03-23 13:21:56.405449 | orchestrator | 2025-03-23 13:21:56 | INFO  | Task 5fc88d2e-59df-4454-a7ce-8b16a0a79928 is in state STARTED 2025-03-23 13:21:56.414119 | orchestrator | 2025-03-23 13:21:56 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:21:59.523435 | orchestrator | 2025-03-23 13:21:56 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state STARTED 2025-03-23 13:21:59.523510 | orchestrator | 2025-03-23 13:21:56 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:21:59.523539 | orchestrator | 2025-03-23 13:21:59 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:21:59.526463 | orchestrator | 2025-03-23 13:21:59 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:21:59.529744 | orchestrator | 2025-03-23 13:21:59 | INFO  | Task 747f8b24-f0d4-4685-9349-c0f9449981da is in state STARTED 2025-03-23 13:21:59.533474 | orchestrator | 2025-03-23 13:21:59 | INFO  | Task 5fc88d2e-59df-4454-a7ce-8b16a0a79928 is in state STARTED 2025-03-23 13:21:59.538295 | orchestrator | 2025-03-23 13:21:59 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:21:59.543172 | orchestrator | 2025-03-23 13:21:59 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state STARTED 2025-03-23 13:22:02.619675 | orchestrator | 2025-03-23 13:21:59 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:22:02.619798 | orchestrator | 2025-03-23 13:22:02 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:22:02.620622 | orchestrator | 2025-03-23 13:22:02 | INFO  | Task 
d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:22:02.620652 | orchestrator | 2025-03-23 13:22:02 | INFO  | Task 747f8b24-f0d4-4685-9349-c0f9449981da is in state STARTED 2025-03-23 13:22:02.620688 | orchestrator | 2025-03-23 13:22:02 | INFO  | Task 5fc88d2e-59df-4454-a7ce-8b16a0a79928 is in state STARTED 2025-03-23 13:22:02.624418 | orchestrator | 2025-03-23 13:22:02 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:22:02.625551 | orchestrator | 2025-03-23 13:22:02 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state STARTED 2025-03-23 13:22:05.705403 | orchestrator | 2025-03-23 13:22:02 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:22:05.705534 | orchestrator | 2025-03-23 13:22:05 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:22:05.712391 | orchestrator | 2025-03-23 13:22:05 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:22:05.712424 | orchestrator | 2025-03-23 13:22:05 | INFO  | Task 747f8b24-f0d4-4685-9349-c0f9449981da is in state STARTED 2025-03-23 13:22:05.712446 | orchestrator | 2025-03-23 13:22:05 | INFO  | Task 5fc88d2e-59df-4454-a7ce-8b16a0a79928 is in state STARTED 2025-03-23 13:22:05.715546 | orchestrator | 2025-03-23 13:22:05 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:22:05.715578 | orchestrator | 2025-03-23 13:22:05 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state STARTED 2025-03-23 13:22:08.779256 | orchestrator | 2025-03-23 13:22:05 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:22:08.779373 | orchestrator | 2025-03-23 13:22:08 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:22:08.785197 | orchestrator | 2025-03-23 13:22:08 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:22:08.785229 | orchestrator | 2025-03-23 13:22:08 | INFO  | Task 747f8b24-f0d4-4685-9349-c0f9449981da is in state STARTED 2025-03-23 13:22:08.786339 | orchestrator | 2025-03-23 13:22:08 | INFO  | Task 5fc88d2e-59df-4454-a7ce-8b16a0a79928 is in state STARTED 2025-03-23 13:22:08.786364 | orchestrator | 2025-03-23 13:22:08 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:22:08.786381 | orchestrator | 2025-03-23 13:22:08 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state STARTED 2025-03-23 13:22:11.870401 | orchestrator | 2025-03-23 13:22:08 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:22:11.870531 | orchestrator | 2025-03-23 13:22:11 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:22:15.077581 | orchestrator | 2025-03-23 13:22:11 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:22:15.077687 | orchestrator | 2025-03-23 13:22:11 | INFO  | Task 747f8b24-f0d4-4685-9349-c0f9449981da is in state STARTED 2025-03-23 13:22:15.077704 | orchestrator | 2025-03-23 13:22:11 | INFO  | Task 5fc88d2e-59df-4454-a7ce-8b16a0a79928 is in state STARTED 2025-03-23 13:22:15.077720 | orchestrator | 2025-03-23 13:22:11 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:22:15.077734 | orchestrator | 2025-03-23 13:22:11 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state STARTED 2025-03-23 13:22:15.077749 | orchestrator | 2025-03-23 13:22:11 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:22:15.077780 | orchestrator | 2025-03-23 
13:22:15 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:22:15.078718 | orchestrator | 2025-03-23 13:22:15 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:22:15.078748 | orchestrator | 2025-03-23 13:22:15 | INFO  | Task 747f8b24-f0d4-4685-9349-c0f9449981da is in state SUCCESS 2025-03-23 13:22:15.078770 | orchestrator | 2025-03-23 13:22:15 | INFO  | Task 5fc88d2e-59df-4454-a7ce-8b16a0a79928 is in state STARTED 2025-03-23 13:22:15.082597 | orchestrator | 2025-03-23 13:22:15 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:22:15.089864 | orchestrator | 2025-03-23 13:22:15 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state STARTED 2025-03-23 13:22:18.174659 | orchestrator | 2025-03-23 13:22:15 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:22:18.174780 | orchestrator | 2025-03-23 13:22:18 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:22:18.181055 | orchestrator | 2025-03-23 13:22:18 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:22:18.183424 | orchestrator | 2025-03-23 13:22:18 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:22:18.191194 | orchestrator | 2025-03-23 13:22:18 | INFO  | Task 5fc88d2e-59df-4454-a7ce-8b16a0a79928 is in state STARTED 2025-03-23 13:22:18.195867 | orchestrator | 2025-03-23 13:22:18 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:22:18.200806 | orchestrator | 2025-03-23 13:22:18 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state STARTED 2025-03-23 13:22:21.312567 | orchestrator | 2025-03-23 13:22:18 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:22:21.313296 | orchestrator | 2025-03-23 13:22:21 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:22:21.319960 | orchestrator | 2025-03-23 13:22:21 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:22:21.320000 | orchestrator | 2025-03-23 13:22:21 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:22:21.320050 | orchestrator | 2025-03-23 13:22:21 | INFO  | Task 5fc88d2e-59df-4454-a7ce-8b16a0a79928 is in state STARTED 2025-03-23 13:22:24.354269 | orchestrator | 2025-03-23 13:22:21 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:22:24.354405 | orchestrator | 2025-03-23 13:22:21 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state STARTED 2025-03-23 13:22:24.354424 | orchestrator | 2025-03-23 13:22:21 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:22:24.354455 | orchestrator | 2025-03-23 13:22:24 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:22:24.354935 | orchestrator | 2025-03-23 13:22:24 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:22:24.354969 | orchestrator | 2025-03-23 13:22:24 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:22:24.358085 | orchestrator | 2025-03-23 13:22:24 | INFO  | Task 5fc88d2e-59df-4454-a7ce-8b16a0a79928 is in state STARTED 2025-03-23 13:22:24.361797 | orchestrator | 2025-03-23 13:22:24 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:22:27.457471 | orchestrator | 2025-03-23 13:22:24 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state STARTED 
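The geerlingguy.dotfiles play embedded above follows a simple per-file pattern (only .tmux.conf appears in this run): check whether the target in the home directory is already a link, remove a plain file that would be replaced, then symlink the file from the cloned repository. A sketch of that pattern; the repository path and file list are illustrative and not read from the job's configuration:

    # Link managed dotfiles from a cloned repository into $HOME, removing any
    # pre-existing regular file first (mirrors the "Remove existing dotfiles
    # file if a replacement is being linked" and "Link dotfiles into home
    # folder" tasks above).
    import os

    DOTFILES_REPO = os.path.expanduser("~/dotfiles")   # assumed clone location
    MANAGED = [".tmux.conf"]                            # only item visible in this log

    for name in MANAGED:
        src = os.path.join(DOTFILES_REPO, name)
        dst = os.path.join(os.path.expanduser("~"), name)
        if os.path.islink(dst) and os.readlink(dst) == src:
            continue                                    # already linked, nothing to do
        if os.path.lexists(dst):
            os.remove(dst)                              # replace the existing file
        os.symlink(src, dst)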
2025-03-23 13:22:27.457576 | orchestrator | 2025-03-23 13:22:24 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:22:27.457610 | orchestrator | 2025-03-23 13:22:27 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:22:30.562771 | orchestrator | 2025-03-23 13:22:27 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:22:30.562870 | orchestrator | 2025-03-23 13:22:27 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:22:30.562887 | orchestrator | 2025-03-23 13:22:27 | INFO  | Task 5fc88d2e-59df-4454-a7ce-8b16a0a79928 is in state STARTED 2025-03-23 13:22:30.562902 | orchestrator | 2025-03-23 13:22:27 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:22:30.562916 | orchestrator | 2025-03-23 13:22:27 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state STARTED 2025-03-23 13:22:30.562931 | orchestrator | 2025-03-23 13:22:27 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:22:30.562963 | orchestrator | 2025-03-23 13:22:30 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:22:30.563074 | orchestrator | 2025-03-23 13:22:30 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:22:30.563095 | orchestrator | 2025-03-23 13:22:30 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:22:30.563110 | orchestrator | 2025-03-23 13:22:30 | INFO  | Task 5fc88d2e-59df-4454-a7ce-8b16a0a79928 is in state SUCCESS 2025-03-23 13:22:30.563146 | orchestrator | 2025-03-23 13:22:30 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:22:30.565788 | orchestrator | 2025-03-23 13:22:30 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state STARTED 2025-03-23 13:22:33.611534 | orchestrator | 2025-03-23 13:22:30 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:22:33.611674 | orchestrator | 2025-03-23 13:22:33 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:22:33.613130 | orchestrator | 2025-03-23 13:22:33 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:22:33.613401 | orchestrator | 2025-03-23 13:22:33 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:22:33.613464 | orchestrator | 2025-03-23 13:22:33 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:22:33.614432 | orchestrator | 2025-03-23 13:22:33 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state STARTED 2025-03-23 13:22:36.692944 | orchestrator | 2025-03-23 13:22:33 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:22:36.693060 | orchestrator | 2025-03-23 13:22:36 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:22:36.693716 | orchestrator | 2025-03-23 13:22:36 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:22:36.695227 | orchestrator | 2025-03-23 13:22:36 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:22:36.699247 | orchestrator | 2025-03-23 13:22:36 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:22:39.757911 | orchestrator | 2025-03-23 13:22:36 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state STARTED 2025-03-23 13:22:39.758062 | orchestrator | 2025-03-23 13:22:36 | INFO  | Wait 1 second(s) until the next check 
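The repeated "Task <id> is in state STARTED" / "Wait 1 second(s) until the next check" lines are the orchestrator waiting on the background tasks of the nutshell collection ("No task IDs specified, wait for all currently running tasks"). A minimal sketch of that wait loop; get_task_state() is a hypothetical stand-in for whatever state lookup the real tooling performs:

    # Poll a set of task IDs until none is left running, sleeping between rounds.
    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

As in the log, a task that reaches SUCCESS (such as the dotfiles task above) simply drops out of the list while the remaining ones keep being reported; newly scheduled tasks show up in later polling rounds.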
2025-03-23 13:22:39.758098 | orchestrator | 2025-03-23 13:22:39 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:22:39.758218 | orchestrator | 2025-03-23 13:22:39 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:22:39.758890 | orchestrator | 2025-03-23 13:22:39 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:22:39.761824 | orchestrator | 2025-03-23 13:22:39 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:22:39.765999 | orchestrator | 2025-03-23 13:22:39 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state STARTED 2025-03-23 13:22:42.797751 | orchestrator | 2025-03-23 13:22:39 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:22:42.797862 | orchestrator | 2025-03-23 13:22:42 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:22:42.799131 | orchestrator | 2025-03-23 13:22:42 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:22:42.799728 | orchestrator | 2025-03-23 13:22:42 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:22:42.800810 | orchestrator | 2025-03-23 13:22:42 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:22:45.877172 | orchestrator | 2025-03-23 13:22:42 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state STARTED 2025-03-23 13:22:45.877278 | orchestrator | 2025-03-23 13:22:42 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:22:45.877313 | orchestrator | 2025-03-23 13:22:45 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:22:45.878602 | orchestrator | 2025-03-23 13:22:45 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:22:45.886569 | orchestrator | 2025-03-23 13:22:45 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:22:45.888243 | orchestrator | 2025-03-23 13:22:45 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:22:45.888281 | orchestrator | 2025-03-23 13:22:45 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state STARTED 2025-03-23 13:22:45.888374 | orchestrator | 2025-03-23 13:22:45 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:22:48.956254 | orchestrator | 2025-03-23 13:22:48 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:22:48.961726 | orchestrator | 2025-03-23 13:22:48 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:22:48.970614 | orchestrator | 2025-03-23 13:22:48 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:22:48.976400 | orchestrator | 2025-03-23 13:22:48 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:22:48.981204 | orchestrator | 2025-03-23 13:22:48 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state STARTED 2025-03-23 13:22:52.106232 | orchestrator | 2025-03-23 13:22:48 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:22:52.106348 | orchestrator | 2025-03-23 13:22:52 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:22:52.116420 | orchestrator | 2025-03-23 13:22:52 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:22:52.116461 | orchestrator | 2025-03-23 13:22:52 | INFO  | Task 
d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:22:55.180517 | orchestrator | 2025-03-23 13:22:52 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:22:55.180626 | orchestrator | 2025-03-23 13:22:52 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state STARTED 2025-03-23 13:22:55.180644 | orchestrator | 2025-03-23 13:22:52 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:22:55.180678 | orchestrator | 2025-03-23 13:22:55 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:22:55.190247 | orchestrator | 2025-03-23 13:22:55 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state STARTED 2025-03-23 13:22:55.190719 | orchestrator | 2025-03-23 13:22:55 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:22:55.190749 | orchestrator | 2025-03-23 13:22:55 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:22:55.190771 | orchestrator | 2025-03-23 13:22:55 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state STARTED 2025-03-23 13:22:55.191746 | orchestrator | 2025-03-23 13:22:55 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:22:58.279232 | orchestrator | 2025-03-23 13:22:58 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:22:58.279710 | orchestrator | 2025-03-23 13:22:58 | INFO  | Task e76a9cdf-9cab-4b53-86ac-31e3502d39b0 is in state SUCCESS 2025-03-23 13:22:58.281185 | orchestrator | 2025-03-23 13:22:58.281235 | orchestrator | 2025-03-23 13:22:58.281251 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-03-23 13:22:58.281267 | orchestrator | 2025-03-23 13:22:58.281281 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-03-23 13:22:58.281295 | orchestrator | Sunday 23 March 2025 13:21:36 +0000 (0:00:00.292) 0:00:00.292 ********** 2025-03-23 13:22:58.281309 | orchestrator | ok: [testbed-manager] => { 2025-03-23 13:22:58.281326 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-03-23 13:22:58.281341 | orchestrator | } 2025-03-23 13:22:58.281403 | orchestrator | 2025-03-23 13:22:58.281421 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-03-23 13:22:58.281437 | orchestrator | Sunday 23 March 2025 13:21:36 +0000 (0:00:00.160) 0:00:00.452 ********** 2025-03-23 13:22:58.281453 | orchestrator | ok: [testbed-manager] 2025-03-23 13:22:58.281470 | orchestrator | 2025-03-23 13:22:58.281485 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-03-23 13:22:58.281500 | orchestrator | Sunday 23 March 2025 13:21:38 +0000 (0:00:01.498) 0:00:01.950 ********** 2025-03-23 13:22:58.281515 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-03-23 13:22:58.281550 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-03-23 13:22:58.281565 | orchestrator | 2025-03-23 13:22:58.281581 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-03-23 13:22:58.281596 | orchestrator | Sunday 23 March 2025 13:21:39 +0000 (0:00:01.303) 0:00:03.254 ********** 2025-03-23 13:22:58.281611 | orchestrator | changed: [testbed-manager] 2025-03-23 13:22:58.281626 | orchestrator | 2025-03-23 13:22:58.281641 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-03-23 13:22:58.281657 | orchestrator | Sunday 23 March 2025 13:21:42 +0000 (0:00:03.262) 0:00:06.516 ********** 2025-03-23 13:22:58.281672 | orchestrator | changed: [testbed-manager] 2025-03-23 13:22:58.281687 | orchestrator | 2025-03-23 13:22:58.281702 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-03-23 13:22:58.281717 | orchestrator | Sunday 23 March 2025 13:21:44 +0000 (0:00:01.495) 0:00:08.011 ********** 2025-03-23 13:22:58.281732 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-03-23 13:22:58.281747 | orchestrator | ok: [testbed-manager] 2025-03-23 13:22:58.281763 | orchestrator | 2025-03-23 13:22:58.281778 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-03-23 13:22:58.281793 | orchestrator | Sunday 23 March 2025 13:22:11 +0000 (0:00:27.297) 0:00:35.309 ********** 2025-03-23 13:22:58.281808 | orchestrator | changed: [testbed-manager] 2025-03-23 13:22:58.281823 | orchestrator | 2025-03-23 13:22:58.281838 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:22:58.281853 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:22:58.281870 | orchestrator | 2025-03-23 13:22:58.281886 | orchestrator | Sunday 23 March 2025 13:22:14 +0000 (0:00:02.914) 0:00:38.223 ********** 2025-03-23 13:22:58.281901 | orchestrator | =============================================================================== 2025-03-23 13:22:58.281916 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.29s 2025-03-23 13:22:58.281931 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.26s 2025-03-23 13:22:58.281946 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.91s 2025-03-23 13:22:58.281967 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.50s 2025-03-23 13:22:58.281983 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.50s 2025-03-23 13:22:58.281998 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.30s 2025-03-23 13:22:58.282013 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.16s 2025-03-23 13:22:58.282088 | orchestrator | 2025-03-23 13:22:58.282102 | orchestrator | 2025-03-23 13:22:58.282117 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-03-23 13:22:58.282131 | orchestrator | 2025-03-23 13:22:58.282145 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-03-23 13:22:58.282159 | orchestrator | Sunday 23 March 2025 13:21:36 +0000 (0:00:00.881) 0:00:00.881 ********** 2025-03-23 13:22:58.282174 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-03-23 13:22:58.282189 | orchestrator | 2025-03-23 13:22:58.282203 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-03-23 13:22:58.282217 | orchestrator | Sunday 23 March 2025 13:21:36 +0000 (0:00:00.439) 0:00:01.320 ********** 2025-03-23 13:22:58.282230 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-03-23 13:22:58.282244 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-03-23 13:22:58.282258 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-03-23 13:22:58.282273 | orchestrator | 2025-03-23 13:22:58.282295 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-03-23 13:22:58.282310 | orchestrator | Sunday 23 March 2025 13:21:38 +0000 (0:00:01.607) 0:00:02.928 ********** 2025-03-23 13:22:58.282349 | orchestrator | changed: [testbed-manager] 
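Both the homer play above and the openstackclient play that continues below deploy a small compose-based service: write a docker-compose.yml, then "Manage ... service" with a bounded retry (the "FAILED - RETRYING ... (10 retries left)" lines) before the restart handlers run. A sketch of that retry step, assuming the service is brought up with docker compose; paths, retry count, and delay are illustrative:

    # Re-run "docker compose up -d" until it succeeds, with a bounded number of
    # retries and a pause between attempts, mirroring the FAILED - RETRYING
    # behaviour in the log above.
    import subprocess
    import time

    def manage_compose_service(project_dir, retries=10, delay=10):
        for attempt in range(retries + 1):
            result = subprocess.run(["docker", "compose", "up", "-d"], cwd=project_dir)
            if result.returncode == 0:
                return
            if attempt < retries:
                print(f"FAILED - RETRYING ({retries - attempt} retries left)")
                time.sleep(delay)
        raise RuntimeError(f"service in {project_dir} did not come up after {retries} retries")

    # Illustrative: the play creates /opt/homer, but the compose file's exact
    # destination is not shown in this log.
    manage_compose_service("/opt/homer")

The openstackclient handlers that follow additionally restart the service and wait until its container reports healthy before copying the bash completion script.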
2025-03-23 13:22:58.282364 | orchestrator | 2025-03-23 13:22:58.282398 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-03-23 13:22:58.282412 | orchestrator | Sunday 23 March 2025 13:21:39 +0000 (0:00:01.568) 0:00:04.497 ********** 2025-03-23 13:22:58.282426 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-03-23 13:22:58.282441 | orchestrator | ok: [testbed-manager] 2025-03-23 13:22:58.282455 | orchestrator | 2025-03-23 13:22:58.282481 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-03-23 13:22:58.282496 | orchestrator | Sunday 23 March 2025 13:22:17 +0000 (0:00:37.687) 0:00:42.185 ********** 2025-03-23 13:22:58.282511 | orchestrator | changed: [testbed-manager] 2025-03-23 13:22:58.282525 | orchestrator | 2025-03-23 13:22:58.282539 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-03-23 13:22:58.282553 | orchestrator | Sunday 23 March 2025 13:22:20 +0000 (0:00:02.496) 0:00:44.681 ********** 2025-03-23 13:22:58.282567 | orchestrator | ok: [testbed-manager] 2025-03-23 13:22:58.282580 | orchestrator | 2025-03-23 13:22:58.282594 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-03-23 13:22:58.282608 | orchestrator | Sunday 23 March 2025 13:22:21 +0000 (0:00:01.212) 0:00:45.893 ********** 2025-03-23 13:22:58.282622 | orchestrator | changed: [testbed-manager] 2025-03-23 13:22:58.282636 | orchestrator | 2025-03-23 13:22:58.282650 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-03-23 13:22:58.282664 | orchestrator | Sunday 23 March 2025 13:22:23 +0000 (0:00:02.272) 0:00:48.166 ********** 2025-03-23 13:22:58.282678 | orchestrator | changed: [testbed-manager] 2025-03-23 13:22:58.282692 | orchestrator | 2025-03-23 13:22:58.282706 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-03-23 13:22:58.282720 | orchestrator | Sunday 23 March 2025 13:22:24 +0000 (0:00:01.248) 0:00:49.414 ********** 2025-03-23 13:22:58.282734 | orchestrator | changed: [testbed-manager] 2025-03-23 13:22:58.282748 | orchestrator | 2025-03-23 13:22:58.282761 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-03-23 13:22:58.282775 | orchestrator | Sunday 23 March 2025 13:22:26 +0000 (0:00:01.429) 0:00:50.844 ********** 2025-03-23 13:22:58.282789 | orchestrator | ok: [testbed-manager] 2025-03-23 13:22:58.282803 | orchestrator | 2025-03-23 13:22:58.282817 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:22:58.282831 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:22:58.282846 | orchestrator | 2025-03-23 13:22:58.282860 | orchestrator | Sunday 23 March 2025 13:22:26 +0000 (0:00:00.643) 0:00:51.488 ********** 2025-03-23 13:22:58.282873 | orchestrator | =============================================================================== 2025-03-23 13:22:58.282887 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 37.69s 2025-03-23 13:22:58.282901 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.50s 2025-03-23 13:22:58.282916 | orchestrator | osism.services.openstackclient : Restart 
openstackclient service -------- 2.27s 2025-03-23 13:22:58.282935 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.61s 2025-03-23 13:22:58.282950 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.57s 2025-03-23 13:22:58.282964 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.43s 2025-03-23 13:22:58.282977 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.25s 2025-03-23 13:22:58.282991 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.21s 2025-03-23 13:22:58.283005 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.64s 2025-03-23 13:22:58.283039 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.44s 2025-03-23 13:22:58.283054 | orchestrator | 2025-03-23 13:22:58.283069 | orchestrator | 2025-03-23 13:22:58.283083 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 13:22:58.283097 | orchestrator | 2025-03-23 13:22:58.283111 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-23 13:22:58.283125 | orchestrator | Sunday 23 March 2025 13:21:36 +0000 (0:00:00.742) 0:00:00.742 ********** 2025-03-23 13:22:58.283139 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-03-23 13:22:58.283153 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-03-23 13:22:58.283167 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-03-23 13:22:58.283181 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-03-23 13:22:58.283196 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-03-23 13:22:58.283210 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-03-23 13:22:58.283224 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-03-23 13:22:58.283238 | orchestrator | 2025-03-23 13:22:58.283252 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-03-23 13:22:58.283266 | orchestrator | 2025-03-23 13:22:58.283280 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-03-23 13:22:58.283294 | orchestrator | Sunday 23 March 2025 13:21:37 +0000 (0:00:01.800) 0:00:02.543 ********** 2025-03-23 13:22:58.283322 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:22:58.283338 | orchestrator | 2025-03-23 13:22:58.283353 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-03-23 13:22:58.283385 | orchestrator | Sunday 23 March 2025 13:21:40 +0000 (0:00:02.831) 0:00:05.375 ********** 2025-03-23 13:22:58.283400 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:22:58.283415 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:22:58.283429 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:22:58.283443 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:22:58.283457 | orchestrator | ok: [testbed-manager] 2025-03-23 13:22:58.283470 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:22:58.283484 | 
orchestrator | ok: [testbed-node-5] 2025-03-23 13:22:58.283498 | orchestrator | 2025-03-23 13:22:58.283512 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-03-23 13:22:58.283533 | orchestrator | Sunday 23 March 2025 13:21:43 +0000 (0:00:02.771) 0:00:08.146 ********** 2025-03-23 13:22:58.283548 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:22:58.283562 | orchestrator | ok: [testbed-manager] 2025-03-23 13:22:58.283576 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:22:58.283590 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:22:58.283603 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:22:58.283617 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:22:58.283631 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:22:58.283650 | orchestrator | 2025-03-23 13:22:58.283664 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-03-23 13:22:58.283678 | orchestrator | Sunday 23 March 2025 13:21:48 +0000 (0:00:04.797) 0:00:12.943 ********** 2025-03-23 13:22:58.283692 | orchestrator | changed: [testbed-manager] 2025-03-23 13:22:58.283706 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:22:58.283720 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:22:58.283734 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:22:58.283748 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:22:58.283761 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:22:58.283775 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:22:58.283789 | orchestrator | 2025-03-23 13:22:58.283803 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-03-23 13:22:58.283827 | orchestrator | Sunday 23 March 2025 13:21:51 +0000 (0:00:03.282) 0:00:16.226 ********** 2025-03-23 13:22:58.283841 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:22:58.283855 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:22:58.283869 | orchestrator | changed: [testbed-manager] 2025-03-23 13:22:58.283882 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:22:58.283896 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:22:58.283910 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:22:58.283923 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:22:58.283937 | orchestrator | 2025-03-23 13:22:58.283951 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-03-23 13:22:58.283965 | orchestrator | Sunday 23 March 2025 13:22:01 +0000 (0:00:10.180) 0:00:26.406 ********** 2025-03-23 13:22:58.283978 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:22:58.283992 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:22:58.284006 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:22:58.284019 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:22:58.284033 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:22:58.284047 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:22:58.284060 | orchestrator | changed: [testbed-manager] 2025-03-23 13:22:58.284074 | orchestrator | 2025-03-23 13:22:58.284088 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-03-23 13:22:58.284102 | orchestrator | Sunday 23 March 2025 13:22:21 +0000 (0:00:19.684) 0:00:46.091 ********** 2025-03-23 13:22:58.284117 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:22:58.284135 | orchestrator | 2025-03-23 13:22:58.284149 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-03-23 13:22:58.284163 | orchestrator | Sunday 23 March 2025 13:22:23 +0000 (0:00:02.317) 0:00:48.408 ********** 2025-03-23 13:22:58.284177 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-03-23 13:22:58.284191 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-03-23 13:22:58.284204 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-03-23 13:22:58.284218 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-03-23 13:22:58.284232 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-03-23 13:22:58.284245 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-03-23 13:22:58.284259 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-03-23 13:22:58.284273 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-03-23 13:22:58.284287 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-03-23 13:22:58.284300 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-03-23 13:22:58.284314 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-03-23 13:22:58.284328 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-03-23 13:22:58.284341 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-03-23 13:22:58.284355 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-03-23 13:22:58.284397 | orchestrator | 2025-03-23 13:22:58.284413 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-03-23 13:22:58.284427 | orchestrator | Sunday 23 March 2025 13:22:32 +0000 (0:00:08.154) 0:00:56.563 ********** 2025-03-23 13:22:58.284441 | orchestrator | ok: [testbed-manager] 2025-03-23 13:22:58.284455 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:22:58.284469 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:22:58.284483 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:22:58.284497 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:22:58.284510 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:22:58.284524 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:22:58.284538 | orchestrator | 2025-03-23 13:22:58.284552 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-03-23 13:22:58.284573 | orchestrator | Sunday 23 March 2025 13:22:34 +0000 (0:00:02.755) 0:00:59.318 ********** 2025-03-23 13:22:58.284587 | orchestrator | changed: [testbed-manager] 2025-03-23 13:22:58.284601 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:22:58.284615 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:22:58.284629 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:22:58.284642 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:22:58.284656 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:22:58.284670 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:22:58.284683 | orchestrator | 2025-03-23 13:22:58.284697 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-03-23 13:22:58.284716 | orchestrator | Sunday 23 March 2025 13:22:37 +0000 (0:00:03.069) 0:01:02.388 ********** 2025-03-23 13:22:58.284730 | 
orchestrator | ok: [testbed-manager] 2025-03-23 13:22:58.284744 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:22:58.284757 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:22:58.284771 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:22:58.284791 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:22:58.284806 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:22:58.284819 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:22:58.284833 | orchestrator | 2025-03-23 13:22:58.284847 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-03-23 13:22:58.284861 | orchestrator | Sunday 23 March 2025 13:22:39 +0000 (0:00:02.067) 0:01:04.456 ********** 2025-03-23 13:22:58.284875 | orchestrator | ok: [testbed-manager] 2025-03-23 13:22:58.284888 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:22:58.284902 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:22:58.284916 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:22:58.284929 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:22:58.284943 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:22:58.284956 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:22:58.284970 | orchestrator | 2025-03-23 13:22:58.284984 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-03-23 13:22:58.284998 | orchestrator | Sunday 23 March 2025 13:22:42 +0000 (0:00:02.924) 0:01:07.380 ********** 2025-03-23 13:22:58.285012 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-03-23 13:22:58.285027 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:22:58.285041 | orchestrator | 2025-03-23 13:22:58.285055 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-03-23 13:22:58.285069 | orchestrator | Sunday 23 March 2025 13:22:46 +0000 (0:00:03.571) 0:01:10.951 ********** 2025-03-23 13:22:58.285083 | orchestrator | changed: [testbed-manager] 2025-03-23 13:22:58.285096 | orchestrator | 2025-03-23 13:22:58.285110 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-03-23 13:22:58.285124 | orchestrator | Sunday 23 March 2025 13:22:51 +0000 (0:00:05.332) 0:01:16.284 ********** 2025-03-23 13:22:58.285137 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:22:58.285152 | orchestrator | changed: [testbed-manager] 2025-03-23 13:22:58.285174 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:22:58.285189 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:22:58.285203 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:22:58.285217 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:22:58.285230 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:22:58.285244 | orchestrator | 2025-03-23 13:22:58.285258 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:22:58.285272 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:22:58.285286 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:22:58.285306 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2025-03-23 13:22:58.285326 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:22:58.285340 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:22:58.285353 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:22:58.285383 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:22:58.285398 | orchestrator | 2025-03-23 13:22:58.285412 | orchestrator | Sunday 23 March 2025 13:22:55 +0000 (0:00:04.122) 0:01:20.407 ********** 2025-03-23 13:22:58.285426 | orchestrator | =============================================================================== 2025-03-23 13:22:58.285440 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 19.68s 2025-03-23 13:22:58.285454 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.18s 2025-03-23 13:22:58.285467 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 8.15s 2025-03-23 13:22:58.285481 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 5.33s 2025-03-23 13:22:58.285495 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.80s 2025-03-23 13:22:58.285509 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 4.12s 2025-03-23 13:22:58.285522 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 3.57s 2025-03-23 13:22:58.285536 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.28s 2025-03-23 13:22:58.285550 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 3.07s 2025-03-23 13:22:58.285564 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.92s 2025-03-23 13:22:58.285578 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.83s 2025-03-23 13:22:58.285591 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.77s 2025-03-23 13:22:58.285606 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.76s 2025-03-23 13:22:58.285619 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.32s 2025-03-23 13:22:58.285640 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.07s 2025-03-23 13:22:58.285771 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.80s 2025-03-23 13:22:58.285792 | orchestrator | 2025-03-23 13:22:58 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:22:58.285811 | orchestrator | 2025-03-23 13:22:58 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:23:01.355821 | orchestrator | 2025-03-23 13:22:58 | INFO  | Task 00d9da82-128a-4edd-9913-cc5250222d82 is in state SUCCESS 2025-03-23 13:23:01.355927 | orchestrator | 2025-03-23 13:22:58 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:23:01.355961 | orchestrator | 2025-03-23 13:23:01 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:23:01.356213 | orchestrator | 
2025-03-23 13:23:01 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:23:01.358094 | orchestrator | 2025-03-23 13:23:01 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:23:01.358215 | orchestrator | 2025-03-23 13:23:01 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:23:04.401198 | orchestrator | 2025-03-23 13:23:04 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:23:04.405817 | orchestrator | 2025-03-23 13:23:04 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:23:04.409503 | orchestrator | 2025-03-23 13:23:04 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:23:07.478258 | orchestrator | 2025-03-23 13:23:04 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:23:07.478372 | orchestrator | 2025-03-23 13:23:07 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:23:07.481714 | orchestrator | 2025-03-23 13:23:07 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:23:07.484607 | orchestrator | 2025-03-23 13:23:07 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:23:07.485119 | orchestrator | 2025-03-23 13:23:07 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:23:10.546694 | orchestrator | 2025-03-23 13:23:10 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:23:10.549061 | orchestrator | 2025-03-23 13:23:10 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:23:10.549481 | orchestrator | 2025-03-23 13:23:10 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:23:10.550740 | orchestrator | 2025-03-23 13:23:10 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:23:13.601162 | orchestrator | 2025-03-23 13:23:13 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:23:13.602091 | orchestrator | 2025-03-23 13:23:13 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:23:13.602702 | orchestrator | 2025-03-23 13:23:13 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:23:13.602943 | orchestrator | 2025-03-23 13:23:13 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:23:16.670501 | orchestrator | 2025-03-23 13:23:16 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:23:16.671084 | orchestrator | 2025-03-23 13:23:16 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:23:16.671782 | orchestrator | 2025-03-23 13:23:16 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:23:16.671925 | orchestrator | 2025-03-23 13:23:16 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:23:19.729873 | orchestrator | 2025-03-23 13:23:19 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:23:19.730108 | orchestrator | 2025-03-23 13:23:19 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:23:19.734547 | orchestrator | 2025-03-23 13:23:19 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:23:22.788987 | orchestrator | 2025-03-23 13:23:19 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:23:22.789109 | orchestrator | 2025-03-23 13:23:22 | INFO  | Task 
f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:23:22.789944 | orchestrator | 2025-03-23 13:23:22 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:23:22.791620 | orchestrator | 2025-03-23 13:23:22 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:23:25.832834 | orchestrator | 2025-03-23 13:23:22 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:23:25.832946 | orchestrator | 2025-03-23 13:23:25 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:23:25.833053 | orchestrator | 2025-03-23 13:23:25 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:23:25.835170 | orchestrator | 2025-03-23 13:23:25 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:23:28.882832 | orchestrator | 2025-03-23 13:23:25 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:23:28.882952 | orchestrator | 2025-03-23 13:23:28 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:23:31.941616 | orchestrator | 2025-03-23 13:23:28 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:23:31.941710 | orchestrator | 2025-03-23 13:23:28 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:23:31.941727 | orchestrator | 2025-03-23 13:23:28 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:23:31.941775 | orchestrator | 2025-03-23 13:23:31 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:23:31.945719 | orchestrator | 2025-03-23 13:23:31 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:23:31.951757 | orchestrator | 2025-03-23 13:23:31 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:23:31.952956 | orchestrator | 2025-03-23 13:23:31 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:23:35.015489 | orchestrator | 2025-03-23 13:23:35 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:23:35.017788 | orchestrator | 2025-03-23 13:23:35 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:23:35.018553 | orchestrator | 2025-03-23 13:23:35 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:23:38.049561 | orchestrator | 2025-03-23 13:23:35 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:23:38.049681 | orchestrator | 2025-03-23 13:23:38 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:23:38.052059 | orchestrator | 2025-03-23 13:23:38 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:23:38.052138 | orchestrator | 2025-03-23 13:23:38 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:23:38.052160 | orchestrator | 2025-03-23 13:23:38 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:23:41.101232 | orchestrator | 2025-03-23 13:23:41 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:23:41.101493 | orchestrator | 2025-03-23 13:23:41 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:23:41.104350 | orchestrator | 2025-03-23 13:23:41 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:23:44.160269 | orchestrator | 2025-03-23 13:23:41 | INFO  | Wait 1 second(s) until the next 
check 2025-03-23 13:23:44.160383 | orchestrator | 2025-03-23 13:23:44 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:23:44.162215 | orchestrator | 2025-03-23 13:23:44 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:23:44.164760 | orchestrator | 2025-03-23 13:23:44 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:23:47.215298 | orchestrator | 2025-03-23 13:23:44 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:23:47.215462 | orchestrator | 2025-03-23 13:23:47 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:23:47.216671 | orchestrator | 2025-03-23 13:23:47 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:23:47.216704 | orchestrator | 2025-03-23 13:23:47 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:23:50.270803 | orchestrator | 2025-03-23 13:23:47 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:23:50.270899 | orchestrator | 2025-03-23 13:23:50 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:23:50.274406 | orchestrator | 2025-03-23 13:23:50 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:23:50.277590 | orchestrator | 2025-03-23 13:23:50 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:23:53.326877 | orchestrator | 2025-03-23 13:23:50 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:23:53.326969 | orchestrator | 2025-03-23 13:23:53 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:23:53.330555 | orchestrator | 2025-03-23 13:23:53 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:23:53.330623 | orchestrator | 2025-03-23 13:23:53 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:23:53.330645 | orchestrator | 2025-03-23 13:23:53 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:23:56.440680 | orchestrator | 2025-03-23 13:23:56 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:23:56.444442 | orchestrator | 2025-03-23 13:23:56 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:23:56.447089 | orchestrator | 2025-03-23 13:23:56 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:23:59.495536 | orchestrator | 2025-03-23 13:23:56 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:23:59.495646 | orchestrator | 2025-03-23 13:23:59 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:23:59.498772 | orchestrator | 2025-03-23 13:23:59 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:24:02.553270 | orchestrator | 2025-03-23 13:23:59 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:24:02.553359 | orchestrator | 2025-03-23 13:23:59 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:24:02.553389 | orchestrator | 2025-03-23 13:24:02 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:24:02.554476 | orchestrator | 2025-03-23 13:24:02 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state STARTED 2025-03-23 13:24:02.555882 | orchestrator | 2025-03-23 13:24:02 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 
13:24:05.625234 | orchestrator | 2025-03-23 13:24:02 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:24:05.625352 | orchestrator | 2025-03-23 13:24:05 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:24:05.626215 | orchestrator | 2025-03-23 13:24:05 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:24:05.630952 | orchestrator | 2025-03-23 13:24:05 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:24:05.631843 | orchestrator | 2025-03-23 13:24:05 | INFO  | Task ec63e151-e93b-431d-a4da-c644bca675c7 is in state STARTED 2025-03-23 13:24:05.634960 | orchestrator | 2025-03-23 13:24:05 | INFO  | Task d411ebdb-b351-4370-a50f-9921d1292800 is in state SUCCESS 2025-03-23 13:24:05.637243 | orchestrator | 2025-03-23 13:24:05.637314 | orchestrator | 2025-03-23 13:24:05.637330 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-03-23 13:24:05.637345 | orchestrator | 2025-03-23 13:24:05.637359 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-03-23 13:24:05.637373 | orchestrator | Sunday 23 March 2025 13:21:56 +0000 (0:00:00.570) 0:00:00.570 ********** 2025-03-23 13:24:05.637387 | orchestrator | ok: [testbed-manager] 2025-03-23 13:24:05.637402 | orchestrator | 2025-03-23 13:24:05.637443 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-03-23 13:24:05.637459 | orchestrator | Sunday 23 March 2025 13:21:58 +0000 (0:00:01.674) 0:00:02.244 ********** 2025-03-23 13:24:05.637473 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-03-23 13:24:05.637494 | orchestrator | 2025-03-23 13:24:05.637508 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-03-23 13:24:05.637522 | orchestrator | Sunday 23 March 2025 13:21:59 +0000 (0:00:01.200) 0:00:03.445 ********** 2025-03-23 13:24:05.637536 | orchestrator | changed: [testbed-manager] 2025-03-23 13:24:05.637550 | orchestrator | 2025-03-23 13:24:05.637564 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-03-23 13:24:05.637577 | orchestrator | Sunday 23 March 2025 13:22:01 +0000 (0:00:02.323) 0:00:05.769 ********** 2025-03-23 13:24:05.637591 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
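The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from the deploy wrapper polling the orchestrator's task queue until every submitted task reaches a terminal state. A minimal, self-contained sketch of that polling pattern, assuming a get_state(task_id) callable that returns Celery-style states (STARTED, SUCCESS, FAILURE); the real osism client is more elaborate:

    import time
    from typing import Callable, Iterable

    TERMINAL_STATES = {"SUCCESS", "FAILURE"}

    def wait_for_tasks(
        get_state: Callable[[str], str],
        task_ids: Iterable[str],
        poll_interval: float = 1.0,
        timeout: float = 3600.0,
    ) -> bool:
        """Poll task states until all are terminal or the timeout expires."""
        pending = set(task_ids)
        deadline = time.monotonic() + timeout
        while pending and time.monotonic() < deadline:
            for task_id in sorted(pending):
                state = get_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in TERMINAL_STATES:
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(poll_interval)} second(s) until the next check")
                time.sleep(poll_interval)
        return not pending
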
2025-03-23 13:24:05.637606 | orchestrator | ok: [testbed-manager] 2025-03-23 13:24:05.637666 | orchestrator | 2025-03-23 13:24:05.637681 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-03-23 13:24:05.637724 | orchestrator | Sunday 23 March 2025 13:22:50 +0000 (0:00:48.776) 0:00:54.545 ********** 2025-03-23 13:24:05.637739 | orchestrator | changed: [testbed-manager] 2025-03-23 13:24:05.637776 | orchestrator | 2025-03-23 13:24:05.637792 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:24:05.637806 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:24:05.637822 | orchestrator | 2025-03-23 13:24:05.637924 | orchestrator | Sunday 23 March 2025 13:22:55 +0000 (0:00:04.515) 0:00:59.061 ********** 2025-03-23 13:24:05.637941 | orchestrator | =============================================================================== 2025-03-23 13:24:05.637957 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 48.78s 2025-03-23 13:24:05.637972 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.52s 2025-03-23 13:24:05.637988 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.32s 2025-03-23 13:24:05.638004 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.67s 2025-03-23 13:24:05.638100 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.20s 2025-03-23 13:24:05.638120 | orchestrator | 2025-03-23 13:24:05.638136 | orchestrator | 2025-03-23 13:24:05.638152 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-03-23 13:24:05.638167 | orchestrator | 2025-03-23 13:24:05.638183 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-03-23 13:24:05.638197 | orchestrator | Sunday 23 March 2025 13:21:31 +0000 (0:00:00.349) 0:00:00.349 ********** 2025-03-23 13:24:05.638211 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:24:05.638227 | orchestrator | 2025-03-23 13:24:05.638240 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-03-23 13:24:05.638255 | orchestrator | Sunday 23 March 2025 13:21:32 +0000 (0:00:01.728) 0:00:02.078 ********** 2025-03-23 13:24:05.638269 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-03-23 13:24:05.638282 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-03-23 13:24:05.638307 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-03-23 13:24:05.638321 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-03-23 13:24:05.638335 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-03-23 13:24:05.638348 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-03-23 13:24:05.638364 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-03-23 13:24:05.638378 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 
'cron'}, 'cron']) 2025-03-23 13:24:05.638392 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-03-23 13:24:05.638406 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-03-23 13:24:05.638439 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-03-23 13:24:05.638454 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-03-23 13:24:05.638468 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-03-23 13:24:05.638487 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-03-23 13:24:05.638502 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-03-23 13:24:05.638515 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-03-23 13:24:05.638529 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-03-23 13:24:05.638555 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-03-23 13:24:05.638570 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-03-23 13:24:05.638584 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-03-23 13:24:05.638598 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-03-23 13:24:05.638611 | orchestrator | 2025-03-23 13:24:05.638625 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-03-23 13:24:05.638639 | orchestrator | Sunday 23 March 2025 13:21:36 +0000 (0:00:04.068) 0:00:06.147 ********** 2025-03-23 13:24:05.638684 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:24:05.638706 | orchestrator | 2025-03-23 13:24:05.638720 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-03-23 13:24:05.638734 | orchestrator | Sunday 23 March 2025 13:21:39 +0000 (0:00:02.138) 0:00:08.286 ********** 2025-03-23 13:24:05.638752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.638771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.638794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.638809 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.638824 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.638868 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.638892 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.638907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-03-23 13:24:05.638922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.638944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.638959 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.638973 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.638995 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.639010 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.639029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.639044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.639070 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.639085 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.639099 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.639113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.639127 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.639141 | orchestrator | 2025-03-23 13:24:05.639155 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-03-23 13:24:05.639169 | orchestrator | Sunday 23 March 2025 13:21:44 +0000 (0:00:05.123) 0:00:13.410 ********** 2025-03-23 13:24:05.639200 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 13:24:05.639216 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639235 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639257 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:24:05.639271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 13:24:05.639286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639301 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639315 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:24:05.639330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 13:24:05.639370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639400 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:24:05.639449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 13:24:05.639472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639487 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639501 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:24:05.639515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 13:24:05.639530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639559 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:24:05.639581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 13:24:05.639596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639617 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639632 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:24:05.639646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 13:24:05.639661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639689 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:24:05.639703 | orchestrator | 2025-03-23 13:24:05.639717 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-03-23 13:24:05.639732 | orchestrator | Sunday 23 March 2025 13:21:47 +0000 (0:00:02.923) 0:00:16.333 ********** 2025-03-23 13:24:05.639746 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 13:24:05.639767 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639788 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639803 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:24:05.639817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 13:24:05.639833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 13:24:05.639884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 13:24:05.639943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.639972 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:24:05.639986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 13:24:05.640007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.640022 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.640036 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:24:05.640050 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:24:05.640064 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:24:05.640078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 13:24:05.640106 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.640121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.640136 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:24:05.640150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 13:24:05.640165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.640179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.640194 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:24:05.640208 | orchestrator | 2025-03-23 13:24:05.640222 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-03-23 13:24:05.640236 | orchestrator | Sunday 23 March 2025 13:21:51 +0000 (0:00:04.057) 0:00:20.390 ********** 2025-03-23 13:24:05.640250 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:24:05.640264 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:24:05.640278 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:24:05.640292 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:24:05.640306 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:24:05.640320 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:24:05.640334 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:24:05.640348 | orchestrator | 2025-03-23 13:24:05.640362 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-03-23 13:24:05.640376 | orchestrator | Sunday 23 March 2025 13:21:52 +0000 (0:00:01.624) 0:00:22.014 ********** 2025-03-23 13:24:05.640390 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:24:05.640404 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:24:05.640444 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:24:05.640459 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:24:05.640473 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:24:05.640487 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:24:05.640501 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:24:05.640514 | orchestrator | 2025-03-23 13:24:05.640529 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-03-23 13:24:05.640543 | orchestrator | Sunday 23 March 2025 13:21:54 +0000 (0:00:01.646) 0:00:23.661 ********** 2025-03-23 13:24:05.640557 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:24:05.640571 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:24:05.640584 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:24:05.640598 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:24:05.640612 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:24:05.640626 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:24:05.640639 | orchestrator | changed: [testbed-manager] 2025-03-23 13:24:05.640653 | orchestrator | 2025-03-23 13:24:05.640667 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-03-23 13:24:05.640681 | orchestrator | Sunday 23 March 2025 13:22:24 +0000 (0:00:29.794) 0:00:53.456 ********** 2025-03-23 13:24:05.640695 | orchestrator | ok: [testbed-manager] 2025-03-23 13:24:05.640715 | orchestrator | ok: [testbed-node-0] 2025-03-23 
13:24:05.640730 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:24:05.640744 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:24:05.640758 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:24:05.640772 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:24:05.640791 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:24:05.640806 | orchestrator | 2025-03-23 13:24:05.640820 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-03-23 13:24:05.640834 | orchestrator | Sunday 23 March 2025 13:22:27 +0000 (0:00:03.278) 0:00:56.734 ********** 2025-03-23 13:24:05.640848 | orchestrator | ok: [testbed-manager] 2025-03-23 13:24:05.640862 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:24:05.640876 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:24:05.640889 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:24:05.640903 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:24:05.640917 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:24:05.640931 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:24:05.640944 | orchestrator | 2025-03-23 13:24:05.640959 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-03-23 13:24:05.640972 | orchestrator | Sunday 23 March 2025 13:22:29 +0000 (0:00:01.567) 0:00:58.303 ********** 2025-03-23 13:24:05.640986 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:24:05.641000 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:24:05.641014 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:24:05.641028 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:24:05.641041 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:24:05.641055 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:24:05.641069 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:24:05.641083 | orchestrator | 2025-03-23 13:24:05.641097 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-03-23 13:24:05.641111 | orchestrator | Sunday 23 March 2025 13:22:30 +0000 (0:00:01.840) 0:01:00.143 ********** 2025-03-23 13:24:05.641125 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:24:05.641139 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:24:05.641153 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:24:05.641166 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:24:05.641180 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:24:05.641194 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:24:05.641208 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:24:05.641222 | orchestrator | 2025-03-23 13:24:05.641236 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-03-23 13:24:05.641249 | orchestrator | Sunday 23 March 2025 13:22:32 +0000 (0:00:01.128) 0:01:01.272 ********** 2025-03-23 13:24:05.641270 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.641284 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.641303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.641319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.641341 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.641356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.641371 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.641391 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.641406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.641437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.641460 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.641487 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.641502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.641517 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.641532 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.641552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.641572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.641587 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.641601 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.641630 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.641645 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.641660 | orchestrator | 2025-03-23 13:24:05.641675 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-03-23 13:24:05.641689 | orchestrator | Sunday 23 March 2025 13:22:38 +0000 (0:00:06.764) 0:01:08.037 ********** 2025-03-23 13:24:05.641703 | orchestrator | [WARNING]: Skipped 2025-03-23 13:24:05.641717 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-03-23 13:24:05.641732 | orchestrator | to this access issue: 2025-03-23 13:24:05.641752 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-03-23 13:24:05.641766 | orchestrator | directory 2025-03-23 13:24:05.641780 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-23 13:24:05.641794 | orchestrator | 2025-03-23 13:24:05.641808 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-03-23 13:24:05.641822 | orchestrator | Sunday 23 March 2025 13:22:39 +0000 (0:00:01.149) 0:01:09.186 ********** 2025-03-23 13:24:05.641836 | orchestrator | [WARNING]: Skipped 2025-03-23 13:24:05.641850 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-03-23 13:24:05.641863 | orchestrator | to this access issue: 2025-03-23 13:24:05.641878 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-03-23 13:24:05.641891 | orchestrator | directory 2025-03-23 13:24:05.641905 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-23 13:24:05.641919 | orchestrator | 2025-03-23 13:24:05.641933 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-03-23 13:24:05.641947 | orchestrator | Sunday 23 March 2025 13:22:40 +0000 (0:00:00.631) 0:01:09.818 ********** 2025-03-23 13:24:05.641961 | orchestrator | [WARNING]: Skipped 2025-03-23 13:24:05.641975 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-03-23 13:24:05.641989 | orchestrator | to this access issue: 2025-03-23 13:24:05.642002 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-03-23 13:24:05.642043 | orchestrator | directory 2025-03-23 13:24:05.642060 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-23 13:24:05.642075 | orchestrator | 2025-03-23 13:24:05.642089 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-03-23 13:24:05.642103 | orchestrator | Sunday 23 March 2025 13:22:41 +0000 (0:00:00.825) 0:01:10.644 ********** 2025-03-23 13:24:05.642117 | orchestrator | [WARNING]: Skipped 2025-03-23 13:24:05.642131 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-03-23 13:24:05.642146 | orchestrator | to this access issue: 2025-03-23 13:24:05.642160 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-03-23 13:24:05.642173 | orchestrator | directory 2025-03-23 13:24:05.642187 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-23 13:24:05.642201 | orchestrator | 2025-03-23 13:24:05.642215 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-03-23 13:24:05.642229 | orchestrator | Sunday 23 March 2025 13:22:41 +0000 (0:00:00.551) 0:01:11.195 ********** 2025-03-23 13:24:05.642243 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:24:05.642257 | orchestrator | changed: [testbed-manager] 2025-03-23 13:24:05.642271 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:24:05.642285 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:24:05.642299 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:24:05.642313 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:24:05.642327 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:24:05.642341 | orchestrator | 2025-03-23 13:24:05.642355 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-03-23 13:24:05.642369 | orchestrator | Sunday 23 March 2025 13:22:48 +0000 (0:00:06.864) 0:01:18.060 ********** 2025-03-23 13:24:05.642383 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-03-23 13:24:05.642397 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-03-23 13:24:05.642412 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-03-23 13:24:05.642480 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-03-23 13:24:05.642495 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-03-23 13:24:05.642517 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-03-23 13:24:05.642531 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-03-23 13:24:05.642545 | orchestrator | 2025-03-23 13:24:05.642559 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-03-23 13:24:05.642573 | orchestrator | Sunday 23 March 2025 13:22:54 +0000 (0:00:05.906) 0:01:23.966 ********** 2025-03-23 13:24:05.642587 | orchestrator | changed: [testbed-manager] 2025-03-23 13:24:05.642601 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:24:05.642615 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:24:05.642630 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:24:05.642644 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:24:05.642664 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:24:05.642679 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:24:05.642693 | orchestrator | 2025-03-23 13:24:05.642707 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-03-23 13:24:05.642721 | orchestrator | Sunday 23 March 2025 13:22:59 +0000 (0:00:04.482) 0:01:28.448 ********** 2025-03-23 13:24:05.642736 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.642756 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.642772 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.642786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.642801 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.642827 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.642843 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.642866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.642881 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.642896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.642915 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.642930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.642950 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.642965 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.642994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.643007 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.643020 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.643033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:24:05.643046 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.643062 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.643082 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.643095 | orchestrator | 2025-03-23 13:24:05.643108 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-03-23 13:24:05.643120 | orchestrator | Sunday 23 March 2025 13:23:02 +0000 (0:00:03.371) 0:01:31.820 ********** 2025-03-23 13:24:05.643133 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-03-23 13:24:05.643146 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-03-23 13:24:05.643158 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-03-23 13:24:05.643170 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-03-23 13:24:05.643183 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-03-23 13:24:05.643195 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-03-23 13:24:05.643207 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-03-23 13:24:05.643219 | orchestrator | 2025-03-23 13:24:05.643232 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-03-23 13:24:05.643254 | orchestrator | Sunday 23 March 2025 13:23:06 +0000 (0:00:03.610) 0:01:35.431 ********** 2025-03-23 13:24:05.643267 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-23 13:24:05.643280 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-23 13:24:05.643292 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-23 13:24:05.643304 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-23 13:24:05.643317 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-23 13:24:05.643329 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-23 13:24:05.643341 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-23 13:24:05.643353 | orchestrator | 2025-03-23 13:24:05.643366 | orchestrator | TASK [common : Check common 
containers] **************************************** 2025-03-23 13:24:05.643378 | orchestrator | Sunday 23 March 2025 13:23:09 +0000 (0:00:03.232) 0:01:38.663 ********** 2025-03-23 13:24:05.643394 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.643408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.643445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.643459 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.643473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.643496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.643509 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.643529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.643543 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.643562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.643575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.643588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.643601 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.643619 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.643672 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 13:24:05.643686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.643705 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.643719 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.643732 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.643745 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.643757 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:24:05.643770 | orchestrator | 2025-03-23 13:24:05.643782 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-03-23 13:24:05.643795 | orchestrator | Sunday 23 March 2025 13:23:14 +0000 (0:00:05.506) 0:01:44.169 ********** 2025-03-23 13:24:05.643807 | orchestrator | changed: [testbed-manager] 2025-03-23 13:24:05.643826 | orchestrator | 2025-03-23 13:24:05 | INFO  | Task 5ff3ed9c-c2ba-41a5-9491-613b8060aff4 is in state STARTED 2025-03-23 13:24:05.643839 | orchestrator | 2025-03-23 13:24:05 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:24:05.643852 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:24:05.643864 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:24:05.643876 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:24:05.643889 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:24:05.643901 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:24:05.643913 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:24:05.643925 | orchestrator | 2025-03-23 13:24:05.643938 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-03-23 13:24:05.643950 | orchestrator | Sunday 23 March 2025 13:23:17 +0000 (0:00:02.430) 0:01:46.600 ********** 2025-03-23 13:24:05.643962 | orchestrator | changed: [testbed-manager] 2025-03-23 13:24:05.643978 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:24:05.643996 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:24:05.644009 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:24:05.644022 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:24:05.644034 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:24:05.644046 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:24:05.644058 | orchestrator | 2025-03-23 13:24:05.644070 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-23 13:24:05.644082 | orchestrator | Sunday 23 March 2025 13:23:18 +0000 (0:00:01.543) 0:01:48.144 ********** 2025-03-23 13:24:05.644094 |
orchestrator | 2025-03-23 13:24:05.644107 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-23 13:24:05.644119 | orchestrator | Sunday 23 March 2025 13:23:18 +0000 (0:00:00.052) 0:01:48.196 ********** 2025-03-23 13:24:05.644131 | orchestrator | 2025-03-23 13:24:05.644143 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-23 13:24:05.644155 | orchestrator | Sunday 23 March 2025 13:23:19 +0000 (0:00:00.071) 0:01:48.268 ********** 2025-03-23 13:24:05.644167 | orchestrator | 2025-03-23 13:24:05.644179 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-23 13:24:05.644192 | orchestrator | Sunday 23 March 2025 13:23:19 +0000 (0:00:00.075) 0:01:48.343 ********** 2025-03-23 13:24:05.644204 | orchestrator | 2025-03-23 13:24:05.644217 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-23 13:24:05.644229 | orchestrator | Sunday 23 March 2025 13:23:19 +0000 (0:00:00.197) 0:01:48.541 ********** 2025-03-23 13:24:05.644242 | orchestrator | 2025-03-23 13:24:05.644254 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-23 13:24:05.644266 | orchestrator | Sunday 23 March 2025 13:23:19 +0000 (0:00:00.055) 0:01:48.597 ********** 2025-03-23 13:24:05.644278 | orchestrator | 2025-03-23 13:24:05.644291 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-23 13:24:05.644303 | orchestrator | Sunday 23 March 2025 13:23:19 +0000 (0:00:00.049) 0:01:48.646 ********** 2025-03-23 13:24:05.644315 | orchestrator | 2025-03-23 13:24:05.644327 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-03-23 13:24:05.644340 | orchestrator | Sunday 23 March 2025 13:23:19 +0000 (0:00:00.067) 0:01:48.713 ********** 2025-03-23 13:24:05.644352 | orchestrator | changed: [testbed-manager] 2025-03-23 13:24:05.644364 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:24:05.644376 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:24:05.644389 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:24:05.644401 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:24:05.644454 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:24:05.644470 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:24:05.644482 | orchestrator | 2025-03-23 13:24:05.644495 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-03-23 13:24:05.644507 | orchestrator | Sunday 23 March 2025 13:23:27 +0000 (0:00:08.342) 0:01:57.056 ********** 2025-03-23 13:24:05.644520 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:24:05.644532 | orchestrator | changed: [testbed-manager] 2025-03-23 13:24:05.644545 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:24:05.644557 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:24:05.644569 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:24:05.644582 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:24:05.644594 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:24:05.644606 | orchestrator | 2025-03-23 13:24:05.644619 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-03-23 13:24:05.644631 | orchestrator | Sunday 23 March 2025 13:23:50 +0000 (0:00:23.028) 0:02:20.084 ********** 2025-03-23 13:24:05.644643 
| orchestrator | ok: [testbed-node-1] 2025-03-23 13:24:05.644655 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:24:05.644668 | orchestrator | ok: [testbed-manager] 2025-03-23 13:24:05.644680 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:24:05.644693 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:24:05.644705 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:24:05.644724 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:24:05.644736 | orchestrator | 2025-03-23 13:24:05.644749 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-03-23 13:24:05.644761 | orchestrator | Sunday 23 March 2025 13:23:53 +0000 (0:00:02.805) 0:02:22.890 ********** 2025-03-23 13:24:05.644774 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:24:05.644786 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:24:05.644799 | orchestrator | changed: [testbed-manager] 2025-03-23 13:24:05.644811 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:24:05.644823 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:24:05.644835 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:24:05.644847 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:24:05.644860 | orchestrator | 2025-03-23 13:24:05.644872 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:24:05.644884 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-23 13:24:05.644897 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-23 13:24:05.644917 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-23 13:24:08.700026 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-23 13:24:08.700166 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-23 13:24:08.700188 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-23 13:24:08.700203 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-23 13:24:08.700218 | orchestrator | 2025-03-23 13:24:08.700232 | orchestrator | 2025-03-23 13:24:08.700247 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:24:08.700263 | orchestrator | Sunday 23 March 2025 13:24:03 +0000 (0:00:10.026) 0:02:32.917 ********** 2025-03-23 13:24:08.700277 | orchestrator | =============================================================================== 2025-03-23 13:24:08.700291 | orchestrator | common : Ensure fluentd image is present for label check --------------- 29.80s 2025-03-23 13:24:08.700304 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 23.03s 2025-03-23 13:24:08.700337 | orchestrator | common : Restart cron container ---------------------------------------- 10.03s 2025-03-23 13:24:08.700351 | orchestrator | common : Restart fluentd container -------------------------------------- 8.34s 2025-03-23 13:24:08.700365 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 6.86s 2025-03-23 13:24:08.700379 | orchestrator | common : Copying over config.json files for services -------------------- 6.76s 2025-03-23 13:24:08.700393 | orchestrator 
| common : Copying over cron logrotate config file ------------------------ 5.91s 2025-03-23 13:24:08.700407 | orchestrator | common : Check common containers ---------------------------------------- 5.51s 2025-03-23 13:24:08.700469 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.12s 2025-03-23 13:24:08.700484 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.48s 2025-03-23 13:24:08.700498 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.07s 2025-03-23 13:24:08.700512 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 4.06s 2025-03-23 13:24:08.700526 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.61s 2025-03-23 13:24:08.700567 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.37s 2025-03-23 13:24:08.700583 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 3.28s 2025-03-23 13:24:08.700599 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.23s 2025-03-23 13:24:08.700615 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.92s 2025-03-23 13:24:08.700632 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.81s 2025-03-23 13:24:08.700648 | orchestrator | common : Creating log volume -------------------------------------------- 2.43s 2025-03-23 13:24:08.700662 | orchestrator | common : include_tasks -------------------------------------------------- 2.14s 2025-03-23 13:24:08.700679 | orchestrator | 2025-03-23 13:24:05 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:24:08.700711 | orchestrator | 2025-03-23 13:24:08 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:24:08.702367 | orchestrator | 2025-03-23 13:24:08 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:24:08.703460 | orchestrator | 2025-03-23 13:24:08 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:24:08.704319 | orchestrator | 2025-03-23 13:24:08 | INFO  | Task ec63e151-e93b-431d-a4da-c644bca675c7 is in state STARTED 2025-03-23 13:24:08.704842 | orchestrator | 2025-03-23 13:24:08 | INFO  | Task 5ff3ed9c-c2ba-41a5-9491-613b8060aff4 is in state STARTED 2025-03-23 13:24:08.705710 | orchestrator | 2025-03-23 13:24:08 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:24:11.759520 | orchestrator | 2025-03-23 13:24:08 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:24:11.759689 | orchestrator | 2025-03-23 13:24:11 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:24:11.760062 | orchestrator | 2025-03-23 13:24:11 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:24:11.760869 | orchestrator | 2025-03-23 13:24:11 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:24:11.761940 | orchestrator | 2025-03-23 13:24:11 | INFO  | Task ec63e151-e93b-431d-a4da-c644bca675c7 is in state STARTED 2025-03-23 13:24:11.762768 | orchestrator | 2025-03-23 13:24:11 | INFO  | Task 5ff3ed9c-c2ba-41a5-9491-613b8060aff4 is in state STARTED 2025-03-23 13:24:11.763463 | orchestrator | 2025-03-23 13:24:11 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state 
STARTED 2025-03-23 13:24:14.797118 | orchestrator | 2025-03-23 13:24:11 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:24:14.797256 | orchestrator | 2025-03-23 13:24:14 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:24:14.798103 | orchestrator | 2025-03-23 13:24:14 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:24:14.801828 | orchestrator | 2025-03-23 13:24:14 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:24:14.802512 | orchestrator | 2025-03-23 13:24:14 | INFO  | Task ec63e151-e93b-431d-a4da-c644bca675c7 is in state STARTED 2025-03-23 13:24:14.803614 | orchestrator | 2025-03-23 13:24:14 | INFO  | Task 5ff3ed9c-c2ba-41a5-9491-613b8060aff4 is in state STARTED 2025-03-23 13:24:14.805044 | orchestrator | 2025-03-23 13:24:14 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:24:17.848332 | orchestrator | 2025-03-23 13:24:14 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:24:17.848481 | orchestrator | 2025-03-23 13:24:17 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:24:20.901024 | orchestrator | 2025-03-23 13:24:17 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:24:20.901127 | orchestrator | 2025-03-23 13:24:17 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:24:20.901146 | orchestrator | 2025-03-23 13:24:17 | INFO  | Task ec63e151-e93b-431d-a4da-c644bca675c7 is in state STARTED 2025-03-23 13:24:20.901161 | orchestrator | 2025-03-23 13:24:17 | INFO  | Task 5ff3ed9c-c2ba-41a5-9491-613b8060aff4 is in state STARTED 2025-03-23 13:24:20.901175 | orchestrator | 2025-03-23 13:24:17 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:24:20.901190 | orchestrator | 2025-03-23 13:24:17 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:24:20.901221 | orchestrator | 2025-03-23 13:24:20 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:24:20.914388 | orchestrator | 2025-03-23 13:24:20 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:24:23.961781 | orchestrator | 2025-03-23 13:24:20 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:24:23.961886 | orchestrator | 2025-03-23 13:24:20 | INFO  | Task ec63e151-e93b-431d-a4da-c644bca675c7 is in state STARTED 2025-03-23 13:24:23.961903 | orchestrator | 2025-03-23 13:24:20 | INFO  | Task 5ff3ed9c-c2ba-41a5-9491-613b8060aff4 is in state STARTED 2025-03-23 13:24:23.961917 | orchestrator | 2025-03-23 13:24:20 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:24:23.961931 | orchestrator | 2025-03-23 13:24:20 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:24:23.961960 | orchestrator | 2025-03-23 13:24:23 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:24:23.962283 | orchestrator | 2025-03-23 13:24:23 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:24:23.966270 | orchestrator | 2025-03-23 13:24:23 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:24:23.966586 | orchestrator | 2025-03-23 13:24:23 | INFO  | Task ec63e151-e93b-431d-a4da-c644bca675c7 is in state STARTED 2025-03-23 13:24:23.967481 | orchestrator | 2025-03-23 13:24:23 | INFO  | Task 
5ff3ed9c-c2ba-41a5-9491-613b8060aff4 is in state STARTED 2025-03-23 13:24:23.972915 | orchestrator | 2025-03-23 13:24:23 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:24:27.030147 | orchestrator | 2025-03-23 13:24:23 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:24:27.030279 | orchestrator | 2025-03-23 13:24:27 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:24:27.032925 | orchestrator | 2025-03-23 13:24:27 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:24:27.033849 | orchestrator | 2025-03-23 13:24:27 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:24:27.033881 | orchestrator | 2025-03-23 13:24:27 | INFO  | Task ec63e151-e93b-431d-a4da-c644bca675c7 is in state STARTED 2025-03-23 13:24:27.038695 | orchestrator | 2025-03-23 13:24:27 | INFO  | Task 5ff3ed9c-c2ba-41a5-9491-613b8060aff4 is in state STARTED 2025-03-23 13:24:27.042779 | orchestrator | 2025-03-23 13:24:27 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:24:30.146870 | orchestrator | 2025-03-23 13:24:27 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:24:30.146996 | orchestrator | 2025-03-23 13:24:30 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:24:33.187092 | orchestrator | 2025-03-23 13:24:30 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:24:33.187207 | orchestrator | 2025-03-23 13:24:30 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:24:33.187219 | orchestrator | 2025-03-23 13:24:30 | INFO  | Task ec63e151-e93b-431d-a4da-c644bca675c7 is in state STARTED 2025-03-23 13:24:33.187228 | orchestrator | 2025-03-23 13:24:30 | INFO  | Task 5ff3ed9c-c2ba-41a5-9491-613b8060aff4 is in state STARTED 2025-03-23 13:24:33.187238 | orchestrator | 2025-03-23 13:24:30 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:24:33.187247 | orchestrator | 2025-03-23 13:24:30 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:24:33.187269 | orchestrator | 2025-03-23 13:24:33 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:24:33.188560 | orchestrator | 2025-03-23 13:24:33 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:24:33.191643 | orchestrator | 2025-03-23 13:24:33 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:24:33.193213 | orchestrator | 2025-03-23 13:24:33 | INFO  | Task ec63e151-e93b-431d-a4da-c644bca675c7 is in state STARTED 2025-03-23 13:24:33.195050 | orchestrator | 2025-03-23 13:24:33 | INFO  | Task 5ff3ed9c-c2ba-41a5-9491-613b8060aff4 is in state SUCCESS 2025-03-23 13:24:33.196984 | orchestrator | 2025-03-23 13:24:33 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:24:33.197324 | orchestrator | 2025-03-23 13:24:33 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:24:36.249601 | orchestrator | 2025-03-23 13:24:36 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:24:36.252565 | orchestrator | 2025-03-23 13:24:36 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:24:36.253550 | orchestrator | 2025-03-23 13:24:36 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:24:36.260455 | orchestrator | 2025-03-23 
13:24:36 | INFO  | Task ec63e151-e93b-431d-a4da-c644bca675c7 is in state STARTED 2025-03-23 13:24:36.261319 | orchestrator | 2025-03-23 13:24:36 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:24:36.261345 | orchestrator | 2025-03-23 13:24:36 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:24:36.261841 | orchestrator | 2025-03-23 13:24:36 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:24:39.332231 | orchestrator | 2025-03-23 13:24:39 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:24:39.339046 | orchestrator | 2025-03-23 13:24:39 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:24:39.344206 | orchestrator | 2025-03-23 13:24:39 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:24:39.354224 | orchestrator | 2025-03-23 13:24:39 | INFO  | Task ec63e151-e93b-431d-a4da-c644bca675c7 is in state STARTED 2025-03-23 13:24:39.355635 | orchestrator | 2025-03-23 13:24:39 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:24:39.358386 | orchestrator | 2025-03-23 13:24:39 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:24:39.358570 | orchestrator | 2025-03-23 13:24:39 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:24:42.436324 | orchestrator | 2025-03-23 13:24:42 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:24:42.439612 | orchestrator | 2025-03-23 13:24:42 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:24:42.443591 | orchestrator | 2025-03-23 13:24:42 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:24:42.450886 | orchestrator | 2025-03-23 13:24:42 | INFO  | Task ec63e151-e93b-431d-a4da-c644bca675c7 is in state STARTED 2025-03-23 13:24:42.454903 | orchestrator | 2025-03-23 13:24:42 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:24:42.458647 | orchestrator | 2025-03-23 13:24:42 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:24:42.459073 | orchestrator | 2025-03-23 13:24:42 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:24:45.533879 | orchestrator | 2025-03-23 13:24:45 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:24:45.539151 | orchestrator | 2025-03-23 13:24:45 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:24:45.540551 | orchestrator | 2025-03-23 13:24:45 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:24:45.545957 | orchestrator | 2025-03-23 13:24:45.545995 | orchestrator | 2025-03-23 13:24:45.546011 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 13:24:45.546072 | orchestrator | 2025-03-23 13:24:45.546088 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 13:24:45.546103 | orchestrator | Sunday 23 March 2025 13:24:11 +0000 (0:00:00.886) 0:00:00.886 ********** 2025-03-23 13:24:45.546117 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:24:45.546132 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:24:45.546146 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:24:45.546160 | orchestrator | 2025-03-23 13:24:45.546175 | orchestrator | TASK [Group hosts based on enabled 
services] *********************************** 2025-03-23 13:24:45.546189 | orchestrator | Sunday 23 March 2025 13:24:12 +0000 (0:00:00.970) 0:00:01.856 ********** 2025-03-23 13:24:45.546204 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-03-23 13:24:45.546218 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-03-23 13:24:45.546232 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-03-23 13:24:45.546246 | orchestrator | 2025-03-23 13:24:45.546260 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-03-23 13:24:45.546274 | orchestrator | 2025-03-23 13:24:45.546287 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-03-23 13:24:45.546301 | orchestrator | Sunday 23 March 2025 13:24:12 +0000 (0:00:00.599) 0:00:02.456 ********** 2025-03-23 13:24:45.546316 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:24:45.546331 | orchestrator | 2025-03-23 13:24:45.546345 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-03-23 13:24:45.546358 | orchestrator | Sunday 23 March 2025 13:24:14 +0000 (0:00:01.648) 0:00:04.105 ********** 2025-03-23 13:24:45.546372 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-03-23 13:24:45.546386 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-03-23 13:24:45.546400 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-03-23 13:24:45.546414 | orchestrator | 2025-03-23 13:24:45.546428 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-03-23 13:24:45.546491 | orchestrator | Sunday 23 March 2025 13:24:15 +0000 (0:00:00.884) 0:00:04.989 ********** 2025-03-23 13:24:45.546506 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-03-23 13:24:45.546520 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-03-23 13:24:45.546534 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-03-23 13:24:45.546548 | orchestrator | 2025-03-23 13:24:45.546563 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-03-23 13:24:45.546604 | orchestrator | Sunday 23 March 2025 13:24:17 +0000 (0:00:02.435) 0:00:07.424 ********** 2025-03-23 13:24:45.546620 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:24:45.546649 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:24:45.546664 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:24:45.546680 | orchestrator | 2025-03-23 13:24:45.546700 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-03-23 13:24:45.546715 | orchestrator | Sunday 23 March 2025 13:24:21 +0000 (0:00:03.862) 0:00:11.286 ********** 2025-03-23 13:24:45.546731 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:24:45.546746 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:24:45.546762 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:24:45.546777 | orchestrator | 2025-03-23 13:24:45.546792 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:24:45.546807 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:24:45.546825 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:24:45.546841 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:24:45.546856 | orchestrator | 2025-03-23 13:24:45.546871 | orchestrator | 2025-03-23 13:24:45.546887 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:24:45.546902 | orchestrator | Sunday 23 March 2025 13:24:30 +0000 (0:00:09.003) 0:00:20.290 ********** 2025-03-23 13:24:45.546917 | orchestrator | =============================================================================== 2025-03-23 13:24:45.546931 | orchestrator | memcached : Restart memcached container --------------------------------- 9.00s 2025-03-23 13:24:45.546944 | orchestrator | memcached : Check memcached container ----------------------------------- 3.86s 2025-03-23 13:24:45.546958 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.44s 2025-03-23 13:24:45.546972 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.65s 2025-03-23 13:24:45.546986 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.97s 2025-03-23 13:24:45.546999 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.88s 2025-03-23 13:24:45.547013 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s 2025-03-23 13:24:45.547027 | orchestrator | 2025-03-23 13:24:45.547041 | orchestrator | 2025-03-23 13:24:45.547055 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 13:24:45.547068 | orchestrator | 2025-03-23 13:24:45.547082 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 13:24:45.547096 | orchestrator | Sunday 23 March 2025 13:24:10 +0000 (0:00:00.632) 0:00:00.632 ********** 2025-03-23 13:24:45.547110 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:24:45.547124 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:24:45.547138 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:24:45.547152 | orchestrator | 2025-03-23 13:24:45.547166 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-23 13:24:45.547190 | orchestrator | Sunday 23 March 2025 13:24:11 +0000 (0:00:00.533) 0:00:01.165 ********** 2025-03-23 13:24:45.547205 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-03-23 13:24:45.547219 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-03-23 13:24:45.547233 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-03-23 13:24:45.547247 | orchestrator | 2025-03-23 13:24:45.547261 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-03-23 13:24:45.547275 | orchestrator | 2025-03-23 13:24:45.547288 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-03-23 13:24:45.547302 | orchestrator | Sunday 23 March 2025 13:24:11 +0000 (0:00:00.705) 0:00:01.871 ********** 2025-03-23 13:24:45.547324 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:24:45.547338 | orchestrator | 2025-03-23 13:24:45.547352 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-03-23 13:24:45.547366 | 
orchestrator | Sunday 23 March 2025 13:24:13 +0000 (0:00:01.375) 0:00:03.246 ********** 2025-03-23 13:24:45.547383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547524 | orchestrator | 2025-03-23 13:24:45.547538 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-03-23 13:24:45.547552 | orchestrator | Sunday 23 March 2025 13:24:15 +0000 (0:00:02.156) 0:00:05.403 ********** 2025-03-23 13:24:45.547567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 
'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547682 | orchestrator | 2025-03-23 13:24:45.547697 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-03-23 13:24:45.547711 | orchestrator | Sunday 23 March 2025 13:24:18 +0000 (0:00:03.371) 0:00:08.775 ********** 2025-03-23 13:24:45.547725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 
'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547828 | orchestrator | 2025-03-23 13:24:45.547843 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-03-23 13:24:45.547857 | orchestrator | Sunday 23 March 2025 13:24:23 +0000 (0:00:04.713) 0:00:13.489 ********** 2025-03-23 13:24:45.547871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': 
'30'}}}) 2025-03-23 13:24:45.547900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.547957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 13:24:45.549193 | orchestrator | 2025-03-23 13:24:45.549221 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-03-23 13:24:45.549236 | orchestrator | Sunday 23 March 2025 13:24:27 +0000 (0:00:03.973) 0:00:17.462 ********** 2025-03-23 13:24:45.549250 | orchestrator | 2025-03-23 13:24:45.549264 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-03-23 13:24:45.549279 | orchestrator | Sunday 23 March 2025 13:24:27 +0000 (0:00:00.203) 0:00:17.666 ********** 2025-03-23 13:24:45.549293 | orchestrator | 2025-03-23 13:24:45.549307 | orchestrator | TASK [redis : Flush handlers] 
************************************************** 2025-03-23 13:24:45.549320 | orchestrator | Sunday 23 March 2025 13:24:27 +0000 (0:00:00.068) 0:00:17.734 ********** 2025-03-23 13:24:45.549334 | orchestrator | 2025-03-23 13:24:45.549348 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-03-23 13:24:45.549362 | orchestrator | Sunday 23 March 2025 13:24:28 +0000 (0:00:00.274) 0:00:18.009 ********** 2025-03-23 13:24:45.549376 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:24:45.549390 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:24:45.549403 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:24:45.549417 | orchestrator | 2025-03-23 13:24:45.549453 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-03-23 13:24:45.549468 | orchestrator | Sunday 23 March 2025 13:24:38 +0000 (0:00:10.143) 0:00:28.152 ********** 2025-03-23 13:24:45.549482 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:24:45.549497 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:24:45.549519 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:24:45.549533 | orchestrator | 2025-03-23 13:24:45.549547 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:24:45.549561 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:24:45.549576 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:24:45.549590 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:24:45.549604 | orchestrator | 2025-03-23 13:24:45.549618 | orchestrator | 2025-03-23 13:24:45.549632 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:24:45.549646 | orchestrator | Sunday 23 March 2025 13:24:43 +0000 (0:00:04.977) 0:00:33.130 ********** 2025-03-23 13:24:45.549660 | orchestrator | =============================================================================== 2025-03-23 13:24:45.549674 | orchestrator | redis : Restart redis container ---------------------------------------- 10.14s 2025-03-23 13:24:45.549687 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.98s 2025-03-23 13:24:45.549701 | orchestrator | redis : Copying over redis config files --------------------------------- 4.71s 2025-03-23 13:24:45.549715 | orchestrator | redis : Check redis containers ------------------------------------------ 3.97s 2025-03-23 13:24:45.549729 | orchestrator | redis : Copying over default config.json files -------------------------- 3.37s 2025-03-23 13:24:45.549742 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.16s 2025-03-23 13:24:45.549775 | orchestrator | redis : include_tasks --------------------------------------------------- 1.38s 2025-03-23 13:24:45.549789 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s 2025-03-23 13:24:45.549803 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.55s 2025-03-23 13:24:45.549817 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.53s 2025-03-23 13:24:45.549831 | orchestrator | 2025-03-23 13:24:45 | INFO  | Task ec63e151-e93b-431d-a4da-c644bca675c7 is in state SUCCESS 
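The redis play above iterates over a per-service definition mapping (image, bind mounts and a healthcheck per container) and only notifies the "Restart ... container" handlers when the check task reports a change. The following is a minimal Python sketch of that shape and flow, not kolla-ansible code: the values are copied from the log output, while the helpers (needs_recreate, deploy) and the image-only comparison are hypothetical simplifications.

# A minimal sketch, not kolla-ansible code: it models the per-service
# definition mapping that "Check redis containers" loops over and the
# change-notifies-restart-handler flow of the play. Values copied from the
# log; needs_recreate/deploy and the image-only comparison are hypothetical.
from typing import Dict, List, Optional

redis_services = {
    "redis": {
        "container_name": "redis",
        "image": "registry.osism.tech/kolla/release/redis:6.0.16.20241206",
        "volumes": [
            "/etc/kolla/redis/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "redis:/var/lib/redis/",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
            "timeout": "30",
        },
    },
    "redis-sentinel": {
        "container_name": "redis_sentinel",
        "image": "registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206",
        "volumes": [
            "/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_listen redis-sentinel 26379"],
            "timeout": "30",
        },
    },
}

def needs_recreate(desired: dict, running_image: Optional[str]) -> bool:
    # Hypothetical, simplified check: recreate when the container is missing
    # or runs a different image than the desired definition.
    return running_image != desired["image"]

def deploy(running: Dict[str, Optional[str]]) -> List[str]:
    # Walk the service mapping like the role does and collect the restart
    # handlers that would be notified ("Restart <name> container").
    notified = []
    for name, desired in redis_services.items():
        if needs_recreate(desired, running.get(desired["container_name"])):
            notified.append("Restart %s container" % name)
    return notified

if __name__ == "__main__":
    # On a freshly deployed node nothing is running yet, so both handlers
    # fire, matching the two RUNNING HANDLER blocks in the play above.
    print(deploy({"redis": None, "redis_sentinel": None}))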
2025-03-23 13:24:45.549851 | orchestrator | 2025-03-23 13:24:45 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:24:45.550652 | orchestrator | 2025-03-23 13:24:45 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:24:45.552837 | orchestrator | 2025-03-23 13:24:45 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:24:48.613209 | orchestrator | 2025-03-23 13:24:48 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:24:48.613596 | orchestrator | 2025-03-23 13:24:48 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:24:48.614643 | orchestrator | 2025-03-23 13:24:48 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:24:48.615841 | orchestrator | 2025-03-23 13:24:48 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:24:48.616198 | orchestrator | 2025-03-23 13:24:48 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:24:51.664252 | orchestrator | 2025-03-23 13:24:48 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:24:51.664351 | orchestrator | 2025-03-23 13:24:51 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:24:51.664415 | orchestrator | 2025-03-23 13:24:51 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:24:51.664990 | orchestrator | 2025-03-23 13:24:51 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:24:51.665403 | orchestrator | 2025-03-23 13:24:51 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:24:51.666111 | orchestrator | 2025-03-23 13:24:51 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:24:51.666595 | orchestrator | 2025-03-23 13:24:51 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:24:54.703412 | orchestrator | 2025-03-23 13:24:54 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:24:54.708118 | orchestrator | 2025-03-23 13:24:54 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:24:54.709269 | orchestrator | 2025-03-23 13:24:54 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:24:54.709301 | orchestrator | 2025-03-23 13:24:54 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:24:54.713212 | orchestrator | 2025-03-23 13:24:54 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:24:57.746766 | orchestrator | 2025-03-23 13:24:54 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:24:57.746881 | orchestrator | 2025-03-23 13:24:57 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:24:57.747332 | orchestrator | 2025-03-23 13:24:57 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:24:57.750510 | orchestrator | 2025-03-23 13:24:57 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:24:57.751567 | orchestrator | 2025-03-23 13:24:57 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:24:57.752380 | orchestrator | 2025-03-23 13:24:57 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:25:00.814534 | orchestrator | 2025-03-23 13:24:57 | INFO  | Wait 1 second(s) until the next check 
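The repeating "Task <uuid> is in state STARTED … Wait 1 second(s) until the next check" lines are the deploy wrapper polling its background tasks once per second until they leave the STARTED state. A minimal, hypothetical sketch of that poll-until-done pattern in Ansible terms; the `check_task_state.sh` helper is invented for illustration and is not part of OSISM:

```yaml
# Hypothetical illustration of the polling pattern visible in the log:
# re-check a task's state every second until it is no longer STARTED.
# check_task_state.sh is a placeholder, not a real OSISM command.
- name: Wait until the background task reaches a final state
  ansible.builtin.command: ./check_task_state.sh f8079d8c-9512-4ecd-b2ac-9d3341f82384
  register: task_state
  until: task_state.stdout != "STARTED"
  retries: 600        # give up after roughly ten minutes
  delay: 1            # "Wait 1 second(s) until the next check"
  changed_when: false
```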
2025-03-23 13:25:00.814651 | orchestrator | 2025-03-23 13:25:00 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:25:00.815150 | orchestrator | 2025-03-23 13:25:00 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:25:00.816555 | orchestrator | 2025-03-23 13:25:00 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:25:00.817688 | orchestrator | 2025-03-23 13:25:00 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:25:00.818690 | orchestrator | 2025-03-23 13:25:00 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:25:03.861865 | orchestrator | 2025-03-23 13:25:00 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:25:03.861987 | orchestrator | 2025-03-23 13:25:03 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:25:03.863082 | orchestrator | 2025-03-23 13:25:03 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:25:03.864132 | orchestrator | 2025-03-23 13:25:03 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:25:03.864164 | orchestrator | 2025-03-23 13:25:03 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:25:03.867758 | orchestrator | 2025-03-23 13:25:03 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:25:06.912194 | orchestrator | 2025-03-23 13:25:03 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:25:06.912316 | orchestrator | 2025-03-23 13:25:06 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:25:06.914430 | orchestrator | 2025-03-23 13:25:06 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:25:06.920373 | orchestrator | 2025-03-23 13:25:06 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:25:06.923648 | orchestrator | 2025-03-23 13:25:06 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:25:06.923696 | orchestrator | 2025-03-23 13:25:06 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:25:09.980801 | orchestrator | 2025-03-23 13:25:06 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:25:09.980908 | orchestrator | 2025-03-23 13:25:09 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:25:09.981319 | orchestrator | 2025-03-23 13:25:09 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:25:09.983241 | orchestrator | 2025-03-23 13:25:09 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:25:09.984843 | orchestrator | 2025-03-23 13:25:09 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:25:09.985054 | orchestrator | 2025-03-23 13:25:09 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:25:13.022371 | orchestrator | 2025-03-23 13:25:09 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:25:13.022586 | orchestrator | 2025-03-23 13:25:13 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:25:13.022790 | orchestrator | 2025-03-23 13:25:13 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:25:13.023933 | orchestrator | 2025-03-23 13:25:13 | INFO  | Task 
ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:25:13.029773 | orchestrator | 2025-03-23 13:25:13 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:25:13.034378 | orchestrator | 2025-03-23 13:25:13 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:25:16.091074 | orchestrator | 2025-03-23 13:25:13 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:25:16.091228 | orchestrator | 2025-03-23 13:25:16 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:25:16.096000 | orchestrator | 2025-03-23 13:25:16 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:25:16.096490 | orchestrator | 2025-03-23 13:25:16 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:25:16.097177 | orchestrator | 2025-03-23 13:25:16 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:25:16.097917 | orchestrator | 2025-03-23 13:25:16 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:25:19.151888 | orchestrator | 2025-03-23 13:25:16 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:25:19.152024 | orchestrator | 2025-03-23 13:25:19 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:25:19.152684 | orchestrator | 2025-03-23 13:25:19 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:25:19.154576 | orchestrator | 2025-03-23 13:25:19 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:25:19.157384 | orchestrator | 2025-03-23 13:25:19 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:25:19.159376 | orchestrator | 2025-03-23 13:25:19 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:25:22.216902 | orchestrator | 2025-03-23 13:25:19 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:25:22.217026 | orchestrator | 2025-03-23 13:25:22 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:25:22.219277 | orchestrator | 2025-03-23 13:25:22 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:25:22.219979 | orchestrator | 2025-03-23 13:25:22 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:25:22.220010 | orchestrator | 2025-03-23 13:25:22 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:25:22.221138 | orchestrator | 2025-03-23 13:25:22 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:25:22.221342 | orchestrator | 2025-03-23 13:25:22 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:25:25.272068 | orchestrator | 2025-03-23 13:25:25 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:25:25.272825 | orchestrator | 2025-03-23 13:25:25 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:25:25.273624 | orchestrator | 2025-03-23 13:25:25 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:25:25.275153 | orchestrator | 2025-03-23 13:25:25 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:25:25.278624 | orchestrator | 2025-03-23 13:25:25 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:25:28.331641 | orchestrator | 2025-03-23 
13:25:25 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:25:28.331775 | orchestrator | 2025-03-23 13:25:28 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:25:28.335734 | orchestrator | 2025-03-23 13:25:28 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:25:28.338724 | orchestrator | 2025-03-23 13:25:28 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:25:28.340354 | orchestrator | 2025-03-23 13:25:28 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:25:28.345857 | orchestrator | 2025-03-23 13:25:28 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:25:31.398109 | orchestrator | 2025-03-23 13:25:28 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:25:31.398243 | orchestrator | 2025-03-23 13:25:31 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:25:31.402862 | orchestrator | 2025-03-23 13:25:31 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:25:31.403837 | orchestrator | 2025-03-23 13:25:31 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:25:31.403871 | orchestrator | 2025-03-23 13:25:31 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:25:31.404863 | orchestrator | 2025-03-23 13:25:31 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:25:31.404944 | orchestrator | 2025-03-23 13:25:31 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:25:34.439838 | orchestrator | 2025-03-23 13:25:34 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:25:34.441808 | orchestrator | 2025-03-23 13:25:34 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:25:34.442105 | orchestrator | 2025-03-23 13:25:34 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:25:34.442135 | orchestrator | 2025-03-23 13:25:34 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:25:34.442155 | orchestrator | 2025-03-23 13:25:34 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:25:37.483345 | orchestrator | 2025-03-23 13:25:34 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:25:37.483512 | orchestrator | 2025-03-23 13:25:37 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:25:37.483946 | orchestrator | 2025-03-23 13:25:37 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:25:37.484733 | orchestrator | 2025-03-23 13:25:37 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:25:37.485523 | orchestrator | 2025-03-23 13:25:37 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:25:37.487765 | orchestrator | 2025-03-23 13:25:37 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:25:40.556034 | orchestrator | 2025-03-23 13:25:37 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:25:40.556152 | orchestrator | 2025-03-23 13:25:40 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:25:40.561495 | orchestrator | 2025-03-23 13:25:40 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:25:40.563029 | orchestrator | 2025-03-23 
13:25:40 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:25:40.564313 | orchestrator | 2025-03-23 13:25:40 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:25:40.566712 | orchestrator | 2025-03-23 13:25:40 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:25:43.609910 | orchestrator | 2025-03-23 13:25:40 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:25:43.610095 | orchestrator | 2025-03-23 13:25:43 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:25:43.610200 | orchestrator | 2025-03-23 13:25:43 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:25:43.610224 | orchestrator | 2025-03-23 13:25:43 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state STARTED 2025-03-23 13:25:43.612158 | orchestrator | 2025-03-23 13:25:43 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:25:43.613608 | orchestrator | 2025-03-23 13:25:43 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:25:46.652664 | orchestrator | 2025-03-23 13:25:43 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:25:46.652782 | orchestrator | 2025-03-23 13:25:46 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:25:46.655016 | orchestrator | 2025-03-23 13:25:46 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:25:46.657351 | orchestrator | 2025-03-23 13:25:46 | INFO  | Task ed8d03cf-0265-481a-8699-f4f9f9c292d4 is in state SUCCESS 2025-03-23 13:25:46.661851 | orchestrator | 2025-03-23 13:25:46.661895 | orchestrator | 2025-03-23 13:25:46.661912 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 13:25:46.661928 | orchestrator | 2025-03-23 13:25:46.661943 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 13:25:46.661959 | orchestrator | Sunday 23 March 2025 13:24:11 +0000 (0:00:00.781) 0:00:00.781 ********** 2025-03-23 13:25:46.661974 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:25:46.661990 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:25:46.662005 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:25:46.662068 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:25:46.662084 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:25:46.662098 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:25:46.662112 | orchestrator | 2025-03-23 13:25:46.662126 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-23 13:25:46.662141 | orchestrator | Sunday 23 March 2025 13:24:12 +0000 (0:00:00.859) 0:00:01.640 ********** 2025-03-23 13:25:46.662155 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-23 13:25:46.662169 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-23 13:25:46.662183 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-23 13:25:46.662198 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-23 13:25:46.662212 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-23 13:25:46.662242 | orchestrator | ok: [testbed-node-5] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-23 13:25:46.662257 | orchestrator | 2025-03-23 13:25:46.662271 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-03-23 13:25:46.662286 | orchestrator | 2025-03-23 13:25:46.662300 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-03-23 13:25:46.662314 | orchestrator | Sunday 23 March 2025 13:24:13 +0000 (0:00:01.306) 0:00:02.946 ********** 2025-03-23 13:25:46.662329 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:25:46.662344 | orchestrator | 2025-03-23 13:25:46.662358 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-03-23 13:25:46.662398 | orchestrator | Sunday 23 March 2025 13:24:15 +0000 (0:00:02.100) 0:00:05.047 ********** 2025-03-23 13:25:46.662413 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-03-23 13:25:46.662427 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-03-23 13:25:46.662441 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-03-23 13:25:46.662477 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-03-23 13:25:46.662493 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-03-23 13:25:46.662509 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-03-23 13:25:46.662524 | orchestrator | 2025-03-23 13:25:46.662540 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-03-23 13:25:46.662555 | orchestrator | Sunday 23 March 2025 13:24:17 +0000 (0:00:01.675) 0:00:06.723 ********** 2025-03-23 13:25:46.662570 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-03-23 13:25:46.662592 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-03-23 13:25:46.662608 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-03-23 13:25:46.662624 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-03-23 13:25:46.662639 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-03-23 13:25:46.662654 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-03-23 13:25:46.662669 | orchestrator | 2025-03-23 13:25:46.662684 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-03-23 13:25:46.662700 | orchestrator | Sunday 23 March 2025 13:24:20 +0000 (0:00:03.173) 0:00:09.896 ********** 2025-03-23 13:25:46.662714 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-03-23 13:25:46.662730 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:25:46.662746 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-03-23 13:25:46.662761 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:25:46.662777 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-03-23 13:25:46.662792 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:25:46.662808 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-03-23 13:25:46.662823 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:25:46.662837 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-03-23 13:25:46.662851 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:25:46.662865 | orchestrator | skipping: 
[testbed-node-5] => (item=openvswitch)  2025-03-23 13:25:46.662878 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:25:46.662892 | orchestrator | 2025-03-23 13:25:46.662906 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-03-23 13:25:46.662920 | orchestrator | Sunday 23 March 2025 13:24:23 +0000 (0:00:02.679) 0:00:12.576 ********** 2025-03-23 13:25:46.662934 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:25:46.662947 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:25:46.662961 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:25:46.662975 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:25:46.662989 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:25:46.663003 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:25:46.663017 | orchestrator | 2025-03-23 13:25:46.663031 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-03-23 13:25:46.663045 | orchestrator | Sunday 23 March 2025 13:24:24 +0000 (0:00:01.541) 0:00:14.117 ********** 2025-03-23 13:25:46.663076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663150 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663186 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663222 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663237 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663252 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663274 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663296 | orchestrator | 2025-03-23 13:25:46.663310 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-03-23 13:25:46.663324 | orchestrator | Sunday 23 March 2025 13:24:28 +0000 (0:00:03.828) 0:00:17.945 ********** 2025-03-23 13:25:46.663339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663389 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 
13:25:46.663538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663577 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663592 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663607 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663641 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663665 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 13:25:46.663679 | orchestrator | 2025-03-23 13:25:46.663693 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-03-23 13:25:46.663708 | orchestrator | Sunday 23 March 2025 13:24:35 +0000 (0:00:06.652) 0:00:24.597 ********** 2025-03-23 13:25:46.663722 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:25:46.663736 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:25:46.663750 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:25:46.663764 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:25:46.663778 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:25:46.663792 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:25:46.663805 | orchestrator | 2025-03-23 13:25:46.663848 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-03-23 13:25:46.663865 | orchestrator | Sunday 23 March 2025 13:24:40 +0000 (0:00:05.175) 0:00:29.773 ********** 2025-03-23 13:25:46.663879 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:25:46.663893 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:25:46.663906 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:25:46.663920 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:25:46.663934 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:25:46.663948 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:25:46.663962 | orchestrator | 2025-03-23 13:25:46.663975 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-03-23 13:25:46.663989 | orchestrator | Sunday 23 March 2025 13:24:44 +0000 (0:00:03.733) 0:00:33.506 ********** 2025-03-23 13:25:46.664003 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:25:46.664016 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:25:46.664030 
| orchestrator | skipping: [testbed-node-2] 2025-03-23 13:25:46.664044 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:25:46.664058 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:25:46.664071 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:25:46.664085 | orchestrator | 2025-03-23 13:25:46.664099 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-03-23 13:25:46.664113 | orchestrator | Sunday 23 March 2025 13:24:47 +0000 (0:00:03.437) 0:00:36.944 ********** 2025-03-23 13:25:46.664127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 13:25:46.664150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 13:25:46.664171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 13:25:46.664204 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2025-03-23 13:25:46.664219 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 13:25:46.664234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 13:25:46.664249 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 13:25:46.664278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 13:25:46.664302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 13:25:46.664318 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 13:25:46.664333 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 13:25:46.664347 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 13:25:46.664368 | orchestrator | 2025-03-23 13:25:46.664382 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-23 13:25:46.664396 | orchestrator | Sunday 23 March 2025 13:24:51 +0000 (0:00:03.607) 0:00:40.552 ********** 2025-03-23 13:25:46.664410 | orchestrator | 2025-03-23 13:25:46.664424 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-23 13:25:46.664438 | orchestrator | Sunday 23 March 2025 13:24:51 +0000 (0:00:00.129) 0:00:40.681 ********** 2025-03-23 13:25:46.664452 | orchestrator | 2025-03-23 13:25:46.664523 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-23 13:25:46.664538 | orchestrator | Sunday 23 March 2025 13:24:51 +0000 (0:00:00.348) 0:00:41.030 ********** 2025-03-23 13:25:46.664551 | orchestrator | 2025-03-23 13:25:46.664565 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-23 13:25:46.664579 | orchestrator | Sunday 23 
March 2025 13:24:51 +0000 (0:00:00.154) 0:00:41.184 **********
2025-03-23 13:25:46.664593 | orchestrator |
2025-03-23 13:25:46.664612 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-03-23 13:25:46.664626 | orchestrator | Sunday 23 March 2025 13:24:51 +0000 (0:00:00.315) 0:00:41.500 **********
2025-03-23 13:25:46.664640 | orchestrator |
2025-03-23 13:25:46.664654 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-03-23 13:25:46.664668 | orchestrator | Sunday 23 March 2025 13:24:52 +0000 (0:00:00.185) 0:00:41.686 **********
2025-03-23 13:25:46.664681 | orchestrator |
2025-03-23 13:25:46.664695 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-03-23 13:25:46.664709 | orchestrator | Sunday 23 March 2025 13:24:52 +0000 (0:00:00.361) 0:00:42.047 **********
2025-03-23 13:25:46.664723 | orchestrator | changed: [testbed-node-0]
2025-03-23 13:25:46.664737 | orchestrator | changed: [testbed-node-1]
2025-03-23 13:25:46.664751 | orchestrator | changed: [testbed-node-2]
2025-03-23 13:25:46.664765 | orchestrator | changed: [testbed-node-3]
2025-03-23 13:25:46.664779 | orchestrator | changed: [testbed-node-4]
2025-03-23 13:25:46.664793 | orchestrator | changed: [testbed-node-5]
2025-03-23 13:25:46.664806 | orchestrator |
2025-03-23 13:25:46.664820 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-03-23 13:25:46.664835 | orchestrator | Sunday 23 March 2025 13:25:04 +0000 (0:00:12.252) 0:00:54.299 **********
2025-03-23 13:25:46.664855 | orchestrator | ok: [testbed-node-0]
2025-03-23 13:25:46.664870 | orchestrator | ok: [testbed-node-1]
2025-03-23 13:25:46.664884 | orchestrator | ok: [testbed-node-2]
2025-03-23 13:25:46.664898 | orchestrator | ok: [testbed-node-3]
2025-03-23 13:25:46.664912 | orchestrator | ok: [testbed-node-4]
2025-03-23 13:25:46.664926 | orchestrator | ok: [testbed-node-5]
2025-03-23 13:25:46.664940 | orchestrator |
2025-03-23 13:25:46.664954 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-03-23 13:25:46.664968 | orchestrator | Sunday 23 March 2025 13:25:08 +0000 (0:00:03.341) 0:00:57.641 **********
2025-03-23 13:25:46.664982 | orchestrator | changed: [testbed-node-2]
2025-03-23 13:25:46.664996 | orchestrator | changed: [testbed-node-0]
2025-03-23 13:25:46.665010 | orchestrator | changed: [testbed-node-1]
2025-03-23 13:25:46.665032 | orchestrator | changed: [testbed-node-5]
2025-03-23 13:25:46.665047 | orchestrator | changed: [testbed-node-3]
2025-03-23 13:25:46.665061 | orchestrator | changed: [testbed-node-4]
2025-03-23 13:25:46.665075 | orchestrator |
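The handler sequence above restarts openvswitch_db and then blocks until the database answers before openvswitch_vswitchd is restarted. A hypothetical sketch of such a readiness wait, reusing the `ovsdb-client list-dbs` probe that the container healthcheck in this play already uses; the real kolla-ansible handler is implemented differently:

```yaml
# Hypothetical sketch of the "restart, then wait until ready" pattern above;
# the actual kolla-ansible handlers differ in detail.
- name: Wait for the openvswitch_db container to answer OVSDB requests
  ansible.builtin.command: docker exec openvswitch_db ovsdb-client list-dbs
  register: ovsdb_ready
  until: ovsdb_ready.rc == 0
  retries: 30
  delay: 2
  changed_when: false
```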
2025-03-23 13:25:46.665089 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-03-23 13:25:46.665103 | orchestrator | Sunday 23 March 2025 13:25:18 +0000 (0:00:10.759) 0:01:08.400 **********
2025-03-23 13:25:46.665117 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-03-23 13:25:46.665131 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-03-23 13:25:46.665145 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-03-23 13:25:46.665166 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-03-23 13:25:46.665181 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-03-23 13:25:46.665194 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-03-23 13:25:46.665208 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-03-23 13:25:46.665222 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-03-23 13:25:46.665236 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-03-23 13:25:46.665250 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-03-23 13:25:46.665263 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-03-23 13:25:46.665277 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-03-23 13:25:46.665291 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-03-23 13:25:46.665309 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-03-23 13:25:46.665324 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-03-23 13:25:46.665338 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-03-23 13:25:46.665351 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-03-23 13:25:46.665365 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-03-23 13:25:46.665378 | orchestrator |
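The loop above writes `system-id` and `hostname` into the `external_ids` column of the Open_vSwitch table and ensures `other_config:hw-offload` is absent. A hypothetical shell-level equivalent, wrapped in an Ansible task; the role drives OVSDB through a module rather than this command, and with containerised OVS the calls would go through the openvswitch_vswitchd container:

```yaml
# Hypothetical ovs-vsctl equivalent of the "Set system-id, hostname and
# hw-offload" loop above; illustration only, not the role's implementation.
- name: Set OVS external_ids and ensure hw-offload is absent (illustration)
  ansible.builtin.shell: |
    docker exec openvswitch_vswitchd ovs-vsctl set Open_vSwitch . external_ids:system-id={{ inventory_hostname }}
    docker exec openvswitch_vswitchd ovs-vsctl set Open_vSwitch . external_ids:hostname={{ inventory_hostname }}
    docker exec openvswitch_vswitchd ovs-vsctl remove Open_vSwitch . other_config hw-offload
  changed_when: true
```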
| skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-03-23 13:25:46.665619 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:25:46.665633 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-03-23 13:25:46.665647 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:25:46.665661 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-03-23 13:25:46.665675 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:25:46.665689 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-03-23 13:25:46.665721 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-03-23 13:25:46.665831 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-03-23 13:25:46.665850 | orchestrator | 2025-03-23 13:25:46.665864 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-03-23 13:25:46.665878 | orchestrator | Sunday 23 March 2025 13:25:35 +0000 (0:00:04.155) 0:01:25.400 ********** 2025-03-23 13:25:46.665892 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:25:46.665906 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:25:46.665920 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:25:46.665934 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:25:46.665948 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:25:46.665962 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:25:46.665976 | orchestrator | 2025-03-23 13:25:46.665990 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:25:46.666004 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-23 13:25:46.666050 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-23 13:25:46.666067 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-23 13:25:46.666081 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-23 13:25:46.666095 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-23 13:25:46.666115 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-23 13:25:46.666129 | orchestrator | 2025-03-23 13:25:46.666143 | orchestrator | 2025-03-23 13:25:46.666157 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:25:46.666171 | orchestrator | Sunday 23 March 2025 13:25:45 +0000 (0:00:09.755) 0:01:35.156 ********** 2025-03-23 13:25:46.666185 | orchestrator | =============================================================================== 2025-03-23 13:25:46.666199 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 20.52s 2025-03-23 13:25:46.666213 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 12.25s 2025-03-23 13:25:46.666226 | orchestrator | openvswitch : Set system-id, hostname and hw-offload ------------------- 10.68s 2025-03-23 13:25:46.666240 | orchestrator | openvswitch : Copying over config.json files for services --------------- 6.65s 2025-03-23 13:25:46.666254 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 5.18s 2025-03-23 
13:25:46.666268 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.16s 2025-03-23 13:25:46.666282 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.83s 2025-03-23 13:25:46.666296 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 3.73s 2025-03-23 13:25:46.666310 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.61s 2025-03-23 13:25:46.666323 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 3.44s 2025-03-23 13:25:46.666342 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 3.34s 2025-03-23 13:25:46.666356 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 3.17s 2025-03-23 13:25:46.666370 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.68s 2025-03-23 13:25:46.666384 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.16s 2025-03-23 13:25:46.666398 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.10s 2025-03-23 13:25:46.666419 | orchestrator | module-load : Load modules ---------------------------------------------- 1.68s 2025-03-23 13:25:46.666433 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.54s 2025-03-23 13:25:46.666447 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.49s 2025-03-23 13:25:46.666477 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.31s 2025-03-23 13:25:46.666494 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.86s 2025-03-23 13:25:46.666509 | orchestrator | 2025-03-23 13:25:46 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:25:46.666530 | orchestrator | 2025-03-23 13:25:46 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:25:49.713936 | orchestrator | 2025-03-23 13:25:46 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:25:49.714106 | orchestrator | 2025-03-23 13:25:49 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:25:49.715266 | orchestrator | 2025-03-23 13:25:49 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:25:49.718689 | orchestrator | 2025-03-23 13:25:49 | INFO  | Task e0d0d0d9-70f1-4e67-b3fa-98046a96bdf3 is in state STARTED 2025-03-23 13:25:49.726222 | orchestrator | 2025-03-23 13:25:49 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:25:49.727910 | orchestrator | 2025-03-23 13:25:49 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:25:49.730455 | orchestrator | 2025-03-23 13:25:49 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:25:52.765000 | orchestrator | 2025-03-23 13:25:52 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:25:52.765426 | orchestrator | 2025-03-23 13:25:52 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:25:52.772639 | orchestrator | 2025-03-23 13:25:52 | INFO  | Task e0d0d0d9-70f1-4e67-b3fa-98046a96bdf3 is in state STARTED 2025-03-23 13:25:52.773272 | orchestrator | 2025-03-23 13:25:52 | INFO  | Task 
b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED
2025-03-23 13:25:52 - 13:27:15 | orchestrator | INFO  | Tasks f8079d8c-9512-4ecd-b2ac-9d3341f82384, f77e125a-d72a-4daf-89ef-ff84837807c1, e0d0d0d9-70f1-4e67-b3fa-98046a96bdf3, b77fa585-21e7-4720-b6ae-094bebba6d96 and 302dbb0c-0b77-484a-9990-d112d808667b remain in state STARTED; the status check and the "Wait 1 second(s) until the next check" message repeat every few seconds.
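For orientation, the openvswitch play whose recap appears above mostly reduces to a handful of ovs-vsctl operations per node: writing system-id and hostname into the external_ids column of the Open_vSwitch table, dropping the hw-offload key from other_config, and, on the network nodes, making sure the br-ex bridge and its vxlan0 port exist. The following minimal Python sketch shows those underlying ovs-vsctl calls; it is an illustration only, not the kolla-ansible role that actually ran, and the node name is a placeholder.

import subprocess

def ovs_vsctl(*args: str) -> None:
    # Thin wrapper around the ovs-vsctl CLI; requires the Open vSwitch tools on the host.
    subprocess.run(["ovs-vsctl", *args], check=True)

node_name = "testbed-node-0"  # placeholder; the real play uses each host's inventory name

# Roughly what "Set system-id, hostname and hw-offload" does for the external_ids entries.
ovs_vsctl("set", "Open_vSwitch", ".", f"external_ids:system-id={node_name}")
ovs_vsctl("set", "Open_vSwitch", ".", f"external_ids:hostname={node_name}")

# hw-offload was requested with state=absent, i.e. the key is dropped from other_config if set.
ovs_vsctl("remove", "Open_vSwitch", ".", "other_config", "hw-offload")

# Roughly what "Ensuring OVS bridge/ports are properly setup" does on the network nodes.
ovs_vsctl("--may-exist", "add-br", "br-ex")
ovs_vsctl("--may-exist", "add-port", "br-ex", "vxlan0")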
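The interleaved INFO lines in this part of the log come from the deployment watcher on the manager node, which polls the state of each queued task until it leaves STARTED (the b77fa585-… task flips to SUCCESS a few entries below). A self-contained Python sketch of that polling pattern follows; poll_task_state is a hypothetical stand-in for the real state lookup performed by the OSISM client, and the canned states exist only to make the example terminate.

import itertools
import time

# Hypothetical stand-in for the real state lookup; the fake task reports
# STARTED a few times and then SUCCESS so the example finishes.
_fake_states = itertools.chain(itertools.repeat("STARTED", 3), itertools.repeat("SUCCESS"))

def poll_task_state(task_id: str) -> str:
    return next(_fake_states)

def wait_for_tasks(task_ids, interval=1.0):
    """Print task states in the same style as the log until every task finishes."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = poll_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

wait_for_tasks(["b77fa585-21e7-4720-b6ae-094bebba6d96"])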
2025-03-23 13:27:15.446824 | orchestrator | 2025-03-23 13:27:15 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state STARTED 2025-03-23 13:27:15.447759 | orchestrator | 2025-03-23 13:27:15 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:27:18.489262 | orchestrator | 2025-03-23 13:27:15 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:27:18.489388 | orchestrator | 2025-03-23 13:27:18 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:27:18.490313 | orchestrator | 2025-03-23 13:27:18 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:27:18.490919 | orchestrator | 2025-03-23 13:27:18 | INFO  | Task e0d0d0d9-70f1-4e67-b3fa-98046a96bdf3 is in state STARTED 2025-03-23 13:27:18.491914 | orchestrator | 2025-03-23 13:27:18 | INFO  | Task b77fa585-21e7-4720-b6ae-094bebba6d96 is in state SUCCESS 2025-03-23 13:27:18.493154 | orchestrator | 2025-03-23 13:27:18.493186 | orchestrator | 2025-03-23 13:27:18.493201 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-03-23 13:27:18.493216 | orchestrator | 2025-03-23 13:27:18.493230 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-03-23 13:27:18.493245 | orchestrator | Sunday 23 March 2025 13:24:41 +0000 (0:00:00.270) 0:00:00.270 ********** 2025-03-23 13:27:18.493259 | orchestrator | ok: [localhost] => { 2025-03-23 13:27:18.493276 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-03-23 13:27:18.493290 | orchestrator | } 2025-03-23 13:27:18.493305 | orchestrator | 2025-03-23 13:27:18.493318 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-03-23 13:27:18.493332 | orchestrator | Sunday 23 March 2025 13:24:41 +0000 (0:00:00.139) 0:00:00.410 ********** 2025-03-23 13:27:18.493347 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-03-23 13:27:18.493362 | orchestrator | ...ignoring 2025-03-23 13:27:18.493376 | orchestrator | 2025-03-23 13:27:18.493390 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-03-23 13:27:18.493404 | orchestrator | Sunday 23 March 2025 13:24:45 +0000 (0:00:03.459) 0:00:03.870 ********** 2025-03-23 13:27:18.493418 | orchestrator | skipping: [localhost] 2025-03-23 13:27:18.493432 | orchestrator | 2025-03-23 13:27:18.493446 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-03-23 13:27:18.493461 | orchestrator | Sunday 23 March 2025 13:24:45 +0000 (0:00:00.325) 0:00:04.196 ********** 2025-03-23 13:27:18.493475 | orchestrator | ok: [localhost] 2025-03-23 13:27:18.493536 | orchestrator | 2025-03-23 13:27:18.493551 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 13:27:18.493565 | orchestrator | 2025-03-23 13:27:18.493579 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 13:27:18.493593 | orchestrator | Sunday 23 March 2025 13:24:46 +0000 (0:00:00.828) 0:00:05.025 ********** 2025-03-23 13:27:18.493607 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:27:18.493621 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:27:18.493635 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:27:18.493648 | orchestrator | 2025-03-23 13:27:18.493662 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-23 13:27:18.493676 | orchestrator | Sunday 23 March 2025 13:24:47 +0000 (0:00:01.081) 0:00:06.106 ********** 2025-03-23 13:27:18.493690 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-03-23 13:27:18.493704 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-03-23 13:27:18.493717 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-03-23 13:27:18.493731 | orchestrator | 2025-03-23 13:27:18.493833 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-03-23 13:27:18.493850 | orchestrator | 2025-03-23 13:27:18.493866 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-03-23 13:27:18.493882 | orchestrator | Sunday 23 March 2025 13:24:48 +0000 (0:00:00.749) 0:00:06.855 ********** 2025-03-23 13:27:18.493897 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:27:18.493913 | orchestrator | 2025-03-23 13:27:18.493928 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-03-23 13:27:18.493943 | orchestrator | Sunday 23 March 2025 13:24:50 +0000 (0:00:02.038) 0:00:08.893 ********** 2025-03-23 13:27:18.493958 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:27:18.493974 | orchestrator | 2025-03-23 13:27:18.493989 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-03-23 13:27:18.494004 | orchestrator | Sunday 23 March 2025 13:24:51 +0000 (0:00:01.268) 0:00:10.161 ********** 2025-03-23 13:27:18.494064 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:27:18.494083 | orchestrator | 2025-03-23 13:27:18.494099 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] 
************************************* 2025-03-23 13:27:18.494120 | orchestrator | Sunday 23 March 2025 13:24:51 +0000 (0:00:00.370) 0:00:10.531 ********** 2025-03-23 13:27:18.494135 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:27:18.494149 | orchestrator | 2025-03-23 13:27:18.494163 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-03-23 13:27:18.494176 | orchestrator | Sunday 23 March 2025 13:24:52 +0000 (0:00:00.692) 0:00:11.224 ********** 2025-03-23 13:27:18.494190 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:27:18.494204 | orchestrator | 2025-03-23 13:27:18.494218 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-03-23 13:27:18.494231 | orchestrator | Sunday 23 March 2025 13:24:53 +0000 (0:00:00.726) 0:00:11.951 ********** 2025-03-23 13:27:18.494245 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:27:18.494259 | orchestrator | 2025-03-23 13:27:18.494273 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-03-23 13:27:18.494287 | orchestrator | Sunday 23 March 2025 13:24:54 +0000 (0:00:00.902) 0:00:12.853 ********** 2025-03-23 13:27:18.494301 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:27:18.494314 | orchestrator | 2025-03-23 13:27:18.494328 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-03-23 13:27:18.494342 | orchestrator | Sunday 23 March 2025 13:24:55 +0000 (0:00:01.836) 0:00:14.690 ********** 2025-03-23 13:27:18.494356 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:27:18.494370 | orchestrator | 2025-03-23 13:27:18.494384 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-03-23 13:27:18.494398 | orchestrator | Sunday 23 March 2025 13:24:56 +0000 (0:00:00.949) 0:00:15.639 ********** 2025-03-23 13:27:18.494428 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:27:18.494442 | orchestrator | 2025-03-23 13:27:18.494456 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-03-23 13:27:18.494470 | orchestrator | Sunday 23 March 2025 13:24:57 +0000 (0:00:00.503) 0:00:16.142 ********** 2025-03-23 13:27:18.494484 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:27:18.494526 | orchestrator | 2025-03-23 13:27:18.494550 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-03-23 13:27:18.494565 | orchestrator | Sunday 23 March 2025 13:24:57 +0000 (0:00:00.600) 0:00:16.743 ********** 2025-03-23 13:27:18.494581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-23 13:27:18.494600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-23 13:27:18.494615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-23 13:27:18.494630 | orchestrator | 2025-03-23 13:27:18.494644 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-03-23 13:27:18.494666 | orchestrator | Sunday 23 March 2025 13:24:59 +0000 (0:00:01.221) 0:00:17.965 ********** 2025-03-23 13:27:18.494691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-23 13:27:18.494707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-23 13:27:18.494723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-23 13:27:18.494737 | orchestrator | 2025-03-23 13:27:18.494751 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-03-23 13:27:18.494766 | orchestrator | Sunday 23 March 2025 13:25:01 +0000 (0:00:01.991) 0:00:19.956 ********** 2025-03-23 13:27:18.494779 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-03-23 13:27:18.494794 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-03-23 13:27:18.494808 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-03-23 13:27:18.494829 | orchestrator | 2025-03-23 13:27:18.494843 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-03-23 13:27:18.494857 | orchestrator | Sunday 23 March 2025 13:25:03 +0000 (0:00:02.392) 0:00:22.349 ********** 
2025-03-23 13:27:18.494871 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-03-23 13:27:18.494885 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-03-23 13:27:18.494899 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-03-23 13:27:18.494912 | orchestrator | 2025-03-23 13:27:18.494926 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-03-23 13:27:18.494941 | orchestrator | Sunday 23 March 2025 13:25:07 +0000 (0:00:03.631) 0:00:25.980 ********** 2025-03-23 13:27:18.494955 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-03-23 13:27:18.494968 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-03-23 13:27:18.494987 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-03-23 13:27:18.495002 | orchestrator | 2025-03-23 13:27:18.495022 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-03-23 13:27:18.495037 | orchestrator | Sunday 23 March 2025 13:25:09 +0000 (0:00:02.728) 0:00:28.708 ********** 2025-03-23 13:27:18.495051 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-03-23 13:27:18.495065 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-03-23 13:27:18.495079 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-03-23 13:27:18.495093 | orchestrator | 2025-03-23 13:27:18.495107 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-03-23 13:27:18.495121 | orchestrator | Sunday 23 March 2025 13:25:13 +0000 (0:00:03.680) 0:00:32.389 ********** 2025-03-23 13:27:18.495134 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-03-23 13:27:18.495148 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-03-23 13:27:18.495162 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-03-23 13:27:18.495176 | orchestrator | 2025-03-23 13:27:18.495190 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-03-23 13:27:18.495210 | orchestrator | Sunday 23 March 2025 13:25:15 +0000 (0:00:02.279) 0:00:34.668 ********** 2025-03-23 13:27:18.495224 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-03-23 13:27:18.495238 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-03-23 13:27:18.495252 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-03-23 13:27:18.495266 | orchestrator | 2025-03-23 13:27:18.495280 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-03-23 13:27:18.495294 | orchestrator | Sunday 23 March 2025 13:25:18 +0000 (0:00:02.399) 0:00:37.068 ********** 2025-03-23 13:27:18.495308 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:27:18.495322 | orchestrator | skipping: [testbed-node-1] 
2025-03-23 13:27:18.495337 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:27:18.495351 | orchestrator | 2025-03-23 13:27:18.495365 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-03-23 13:27:18.495379 | orchestrator | Sunday 23 March 2025 13:25:19 +0000 (0:00:01.520) 0:00:38.588 ********** 2025-03-23 13:27:18.495393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-23 13:27:18.495415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-23 13:27:18.495440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-23 13:27:18.495455 | orchestrator | 2025-03-23 13:27:18.495469 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-03-23 13:27:18.495483 | orchestrator | Sunday 23 March 2025 13:25:22 +0000 (0:00:02.931) 0:00:41.520 ********** 2025-03-23 13:27:18.495521 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:27:18.495535 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:27:18.495549 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:27:18.495562 | orchestrator | 2025-03-23 13:27:18.495576 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-03-23 13:27:18.495590 | orchestrator | Sunday 23 March 2025 13:25:23 +0000 (0:00:01.185) 0:00:42.705 ********** 2025-03-23 13:27:18.495604 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:27:18.495617 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:27:18.495631 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:27:18.495645 | orchestrator | 2025-03-23 13:27:18.495658 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-03-23 13:27:18.495680 | orchestrator | Sunday 23 March 2025 13:25:31 +0000 (0:00:07.565) 0:00:50.271 ********** 2025-03-23 13:27:18.495694 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:27:18.495708 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:27:18.495721 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:27:18.495735 | orchestrator | 2025-03-23 13:27:18.495749 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-03-23 13:27:18.495762 | orchestrator | 2025-03-23 13:27:18.495777 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-03-23 13:27:18.495790 | orchestrator | Sunday 23 March 2025 13:25:31 +0000 (0:00:00.360) 0:00:50.632 ********** 2025-03-23 13:27:18.495804 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:27:18.495818 | orchestrator | 2025-03-23 13:27:18.495832 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-03-23 13:27:18.495845 | orchestrator | Sunday 23 March 2025 13:25:32 +0000 (0:00:00.720) 0:00:51.353 ********** 2025-03-23 13:27:18.495859 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:27:18.495873 | orchestrator | 2025-03-23 13:27:18.495887 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-03-23 13:27:18.495900 | orchestrator | Sunday 23 March 2025 13:25:32 +0000 (0:00:00.227) 0:00:51.581 ********** 2025-03-23 13:27:18.495914 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:27:18.495928 | orchestrator | 2025-03-23 13:27:18.495941 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-03-23 13:27:18.495955 | orchestrator | Sunday 23 March 2025 13:25:34 +0000 (0:00:01.850) 0:00:53.431 ********** 2025-03-23 13:27:18.495969 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:27:18.495982 | orchestrator | 2025-03-23 13:27:18.495996 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-03-23 13:27:18.496010 | orchestrator | 2025-03-23 13:27:18.496024 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-03-23 
13:27:18.496038 | orchestrator | Sunday 23 March 2025 13:26:30 +0000 (0:00:55.908) 0:01:49.340 ********** 2025-03-23 13:27:18.496052 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:27:18.496065 | orchestrator | 2025-03-23 13:27:18.496079 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-03-23 13:27:18.496093 | orchestrator | Sunday 23 March 2025 13:26:31 +0000 (0:00:00.665) 0:01:50.005 ********** 2025-03-23 13:27:18.496106 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:27:18.496120 | orchestrator | 2025-03-23 13:27:18.496134 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-03-23 13:27:18.496148 | orchestrator | Sunday 23 March 2025 13:26:31 +0000 (0:00:00.299) 0:01:50.305 ********** 2025-03-23 13:27:18.496161 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:27:18.496175 | orchestrator | 2025-03-23 13:27:18.496189 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-03-23 13:27:18.496203 | orchestrator | Sunday 23 March 2025 13:26:33 +0000 (0:00:01.913) 0:01:52.218 ********** 2025-03-23 13:27:18.496217 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:27:18.496230 | orchestrator | 2025-03-23 13:27:18.496244 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-03-23 13:27:18.496258 | orchestrator | 2025-03-23 13:27:18.496272 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-03-23 13:27:18.496286 | orchestrator | Sunday 23 March 2025 13:26:48 +0000 (0:00:15.392) 0:02:07.610 ********** 2025-03-23 13:27:18.496299 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:27:18.496313 | orchestrator | 2025-03-23 13:27:18.496332 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-03-23 13:27:18.496346 | orchestrator | Sunday 23 March 2025 13:26:49 +0000 (0:00:00.648) 0:02:08.259 ********** 2025-03-23 13:27:18.496360 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:27:18.496379 | orchestrator | 2025-03-23 13:27:18.496393 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-03-23 13:27:18.496412 | orchestrator | Sunday 23 March 2025 13:26:49 +0000 (0:00:00.322) 0:02:08.582 ********** 2025-03-23 13:27:18.498356 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:27:18.498384 | orchestrator | 2025-03-23 13:27:18.498397 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-03-23 13:27:18.498409 | orchestrator | Sunday 23 March 2025 13:26:52 +0000 (0:00:02.397) 0:02:10.979 ********** 2025-03-23 13:27:18.498421 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:27:18.498434 | orchestrator | 2025-03-23 13:27:18.498446 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-03-23 13:27:18.498458 | orchestrator | 2025-03-23 13:27:18.498471 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-03-23 13:27:18.498483 | orchestrator | Sunday 23 March 2025 13:27:08 +0000 (0:00:16.502) 0:02:27.482 ********** 2025-03-23 13:27:18.498515 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:27:18.498528 | orchestrator | 2025-03-23 13:27:18.498541 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] 
2025-03-23 13:27:18.498541 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-03-23 13:27:18.498553 | orchestrator | Sunday 23 March 2025 13:27:10 +0000 (0:00:02.238) 0:02:29.720 **********
2025-03-23 13:27:18.498566 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-03-23 13:27:18.498578 | orchestrator | enable_outward_rabbitmq_True
2025-03-23 13:27:18.498591 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-03-23 13:27:18.498603 | orchestrator | outward_rabbitmq_restart
2025-03-23 13:27:18.498616 | orchestrator | ok: [testbed-node-0]
2025-03-23 13:27:18.498628 | orchestrator | ok: [testbed-node-1]
2025-03-23 13:27:18.498640 | orchestrator | ok: [testbed-node-2]
2025-03-23 13:27:18.498653 | orchestrator |
2025-03-23 13:27:18.498665 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-03-23 13:27:18.498677 | orchestrator | skipping: no hosts matched
2025-03-23 13:27:18.498690 | orchestrator |
2025-03-23 13:27:18.498702 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-03-23 13:27:18.498715 | orchestrator | skipping: no hosts matched
2025-03-23 13:27:18.498727 | orchestrator |
2025-03-23 13:27:18.498739 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-03-23 13:27:18.498753 | orchestrator | skipping: no hosts matched
2025-03-23 13:27:18.498767 | orchestrator |
2025-03-23 13:27:18.498781 | orchestrator | PLAY RECAP *********************************************************************
2025-03-23 13:27:18.498795 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-03-23 13:27:18.498810 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-03-23 13:27:18.498824 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-23 13:27:18.498838 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-23 13:27:18.498852 | orchestrator |
2025-03-23 13:27:18.498866 | orchestrator |
2025-03-23 13:27:18.498880 | orchestrator | TASKS RECAP ********************************************************************
2025-03-23 13:27:18.498894 | orchestrator | Sunday 23 March 2025 13:27:14 +0000 (0:00:03.838) 0:02:33.558 **********
2025-03-23 13:27:18.498908 | orchestrator | ===============================================================================
2025-03-23 13:27:18.498922 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 87.80s
2025-03-23 13:27:18.498936 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.57s
2025-03-23 13:27:18.498950 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 6.16s
2025-03-23 13:27:18.498964 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.84s
2025-03-23 13:27:18.498978 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 3.68s
2025-03-23 13:27:18.499005 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.63s
2025-03-23 13:27:18.499020 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.46s
2025-03-23 13:27:18.499036 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.93s
2025-03-23 13:27:18.499051 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.73s
2025-03-23 13:27:18.499067 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.40s
2025-03-23 13:27:18.499082 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.39s
2025-03-23 13:27:18.499097 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.28s
2025-03-23 13:27:18.499113 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 2.24s
2025-03-23 13:27:18.499134 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.04s
2025-03-23 13:27:18.499150 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.04s
2025-03-23 13:27:18.499165 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.99s
2025-03-23 13:27:18.499181 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.84s
2025-03-23 13:27:18.499196 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.52s
2025-03-23 13:27:18.499211 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.27s
2025-03-23 13:27:18.499226 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.22s
2025-03-23 13:27:18.499247 | orchestrator | 2025-03-23 13:27:18 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED
2025-03-23 13:27:21.560626 | orchestrator | 2025-03-23 13:27:18 | INFO  | Wait 1 second(s) until the next check
2025-03-23 13:27:21.560742 | orchestrator | 2025-03-23 13:27:21 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
2025-03-23 13:27:24.601113 | orchestrator | 2025-03-23 13:27:21 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED
2025-03-23 13:27:24.601220 | orchestrator | 2025-03-23 13:27:21 | INFO  | Task e0d0d0d9-70f1-4e67-b3fa-98046a96bdf3 is in state STARTED
2025-03-23 13:27:24.601239 | orchestrator | 2025-03-23 13:27:21 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED
2025-03-23 13:27:24.601255 | orchestrator | 2025-03-23 13:27:21 | INFO  | Wait 1 second(s) until the next check
[... roughly 25 further polling rounds between 13:27:24 and 13:28:38, repeated about every three seconds: tasks f8079d8c-9512-4ecd-b2ac-9d3341f82384, f77e125a-d72a-4daf-89ef-ff84837807c1, e0d0d0d9-70f1-4e67-b3fa-98046a96bdf3 and 302dbb0c-0b77-484a-9990-d112d808667b all remain in state STARTED, each round followed by "Wait 1 second(s) until the next check" ...]
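The interleaved "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" lines are the client polling the OSISM manager for the four background tasks it started. A minimal sketch of such a wait loop, assuming the behaviour visible in the log; get_task_state() is a hypothetical placeholder for the real status query, which is not shown here.

import time

TASK_IDS = [
    "f8079d8c-9512-4ecd-b2ac-9d3341f82384",
    "f77e125a-d72a-4daf-89ef-ff84837807c1",
    "e0d0d0d9-70f1-4e67-b3fa-98046a96bdf3",
    "302dbb0c-0b77-484a-9990-d112d808667b",
]

def get_task_state(task_id: str) -> str:
    # Hypothetical placeholder for querying the task backend; the actual
    # client/API used by the deployment tooling is not visible in this log.
    raise NotImplementedError

pending = set(TASK_IDS)
while pending:
    for task_id in list(pending):
        state = get_task_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state in ("SUCCESS", "FAILURE"):
            pending.discard(task_id)
    if pending:
        print("Wait 1 second(s) until the next check")
        time.sleep(1)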
2025-03-23 13:28:41.065945 | orchestrator | 2025-03-23 13:28:41 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
2025-03-23 13:28:41.066667 | orchestrator | 2025-03-23 13:28:41 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED
2025-03-23 13:28:41.070730 | orchestrator | 2025-03-23 13:28:41 | INFO  | Task e0d0d0d9-70f1-4e67-b3fa-98046a96bdf3 is in state SUCCESS
2025-03-23 13:28:41.072543 | orchestrator |
2025-03-23 13:28:41.072580 | orchestrator |
2025-03-23 13:28:41.072595 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-23 13:28:41.072610 | orchestrator |
2025-03-23 13:28:41.072624 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-03-23 13:28:41.072639 | orchestrator | Sunday 23 March 2025 13:25:50 +0000 (0:00:00.479) 0:00:00.479 **********
2025-03-23 13:28:41.072653 | orchestrator | ok: [testbed-node-0]
2025-03-23 13:28:41.072668 | orchestrator | ok: [testbed-node-1]
2025-03-23 13:28:41.072683 | orchestrator | ok: [testbed-node-2]
2025-03-23 13:28:41.072696 | orchestrator | ok: [testbed-node-3]
2025-03-23 13:28:41.072710 | orchestrator | ok: [testbed-node-4]
2025-03-23 13:28:41.072724 | orchestrator | ok: [testbed-node-5]
2025-03-23 13:28:41.072738 | orchestrator |
2025-03-23 13:28:41.072752 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-03-23 13:28:41.072766 | orchestrator | Sunday 23 March 2025 13:25:52 +0000 (0:00:01.243) 0:00:01.722 **********
2025-03-23 13:28:41.072780 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-03-23 13:28:41.072794 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-03-23 13:28:41.072808 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-03-23 13:28:41.072821 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-03-23 13:28:41.072835 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-03-23 13:28:41.072849 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-03-23 13:28:41.072863 | orchestrator |
2025-03-23 13:28:41.072877 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-03-23 13:28:41.072890 | orchestrator |
2025-03-23 13:28:41.072904 | orchestrator | TASK [ovn-controller : include_tasks] 
****************************************** 2025-03-23 13:28:41.072918 | orchestrator | Sunday 23 March 2025 13:25:53 +0000 (0:00:01.527) 0:00:03.249 ********** 2025-03-23 13:28:41.072932 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:28:41.072986 | orchestrator | 2025-03-23 13:28:41.073003 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-03-23 13:28:41.073017 | orchestrator | Sunday 23 March 2025 13:25:55 +0000 (0:00:01.647) 0:00:04.896 ********** 2025-03-23 13:28:41.073032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073178 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073196 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073222 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073239 | orchestrator | 
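The "Ensuring config directories exist" task above loops over a per-service dictionary (container_name, image, volumes, dimensions, as visible in the loop items) and creates the matching host-side configuration directory on every node before any files are copied in. A small Python sketch of that pattern, reusing the service definition exactly as it appears in the log; deriving the directory from the first volume entry and the 0o770 permissions are assumptions, not taken from the role.

import os

# Service definition as shown in the loop items above.
services = {
    "ovn-controller": {
        "container_name": "ovn_controller",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206",
        "volumes": [
            "/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro",
            "/run/openvswitch:/run/openvswitch:shared",
            "/etc/localtime:/etc/localtime:ro",
            "kolla_logs:/var/log/kolla/",
        ],
    },
}

for name, svc in services.items():
    if not svc.get("enabled"):
        continue
    # The first volume maps the host config directory into the container at
    # /var/lib/kolla/config_files; make sure the host side exists.
    host_config_dir = svc["volumes"][0].split(":", 1)[0].rstrip("/")
    os.makedirs(host_config_dir, mode=0o770, exist_ok=True)
    print(f"ensured {host_config_dir} for container {svc['container_name']}")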
2025-03-23 13:28:41.073254 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-03-23 13:28:41.073270 | orchestrator | Sunday 23 March 2025 13:25:57 +0000 (0:00:01.986) 0:00:06.883 ********** 2025-03-23 13:28:41.073301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073334 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073366 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073389 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073405 | orchestrator | 2025-03-23 13:28:41.073421 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-03-23 13:28:41.073436 | orchestrator | Sunday 23 March 2025 13:25:59 +0000 (0:00:02.659) 0:00:09.542 ********** 2025-03-23 13:28:41.073452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073538 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073553 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073567 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073581 | orchestrator | 2025-03-23 13:28:41.073595 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-03-23 13:28:41.073610 | orchestrator | Sunday 23 March 2025 13:26:01 +0000 (0:00:01.483) 0:00:11.026 ********** 2025-03-23 13:28:41.073631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073645 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073673 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073687 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073713 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073728 | orchestrator | 2025-03-23 13:28:41.073742 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-03-23 13:28:41.073756 | orchestrator | Sunday 23 March 2025 13:26:03 +0000 (0:00:02.503) 0:00:13.529 ********** 2025-03-23 13:28:41.073770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 
13:28:41.073804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073819 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073833 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073847 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.073861 | orchestrator | 2025-03-23 13:28:41.073875 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-03-23 13:28:41.073890 | orchestrator | Sunday 23 March 2025 13:26:07 +0000 (0:00:03.732) 0:00:17.261 ********** 2025-03-23 13:28:41.073904 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:28:41.073919 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:28:41.073933 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:28:41.073946 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:28:41.073960 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:28:41.073974 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:28:41.073988 | orchestrator | 2025-03-23 13:28:41.074002 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-03-23 13:28:41.074064 | orchestrator | Sunday 23 March 2025 13:26:11 +0000 (0:00:03.712) 0:00:20.974 ********** 2025-03-23 13:28:41.074082 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-03-23 13:28:41.074096 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-03-23 13:28:41.074110 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-03-23 13:28:41.074131 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-03-23 13:28:41.074146 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-03-23 
13:28:41.074160 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-03-23 13:28:41.074173 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-23 13:28:41.074188 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-23 13:28:41.074201 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-23 13:28:41.074221 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-23 13:28:41.074235 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-23 13:28:41.074257 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-23 13:28:41.074272 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-23 13:28:41.074288 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-23 13:28:41.074302 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-23 13:28:41.074316 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-23 13:28:41.074330 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-23 13:28:41.074344 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-23 13:28:41.074360 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-23 13:28:41.074374 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-23 13:28:41.074388 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-23 13:28:41.074402 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-23 13:28:41.074416 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-23 13:28:41.074430 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-23 13:28:41.074444 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-23 13:28:41.074463 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-23 13:28:41.074478 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-23 13:28:41.074492 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-23 13:28:41.074506 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-23 13:28:41.074553 | orchestrator | changed: [testbed-node-3] => 
(item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-23 13:28:41.074568 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-23 13:28:41.074582 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-23 13:28:41.074596 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-23 13:28:41.074609 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-23 13:28:41.074623 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-23 13:28:41.074637 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-03-23 13:28:41.074651 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-03-23 13:28:41.074665 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-23 13:28:41.074679 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-03-23 13:28:41.074694 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-03-23 13:28:41.074720 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-03-23 13:28:41.074734 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-03-23 13:28:41.074749 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-03-23 13:28:41.074763 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-03-23 13:28:41.074777 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-03-23 13:28:41.074791 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-03-23 13:28:41.074805 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-03-23 13:28:41.074819 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-03-23 13:28:41.074833 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-03-23 13:28:41.074847 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-03-23 13:28:41.074861 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-03-23 13:28:41.074875 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-03-23 13:28:41.074889 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 
'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-03-23 13:28:41.074903 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-03-23 13:28:41.074917 | orchestrator | 2025-03-23 13:28:41.074931 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-23 13:28:41.074945 | orchestrator | Sunday 23 March 2025 13:26:32 +0000 (0:00:21.376) 0:00:42.351 ********** 2025-03-23 13:28:41.074959 | orchestrator | 2025-03-23 13:28:41.074973 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-23 13:28:41.074987 | orchestrator | Sunday 23 March 2025 13:26:32 +0000 (0:00:00.118) 0:00:42.469 ********** 2025-03-23 13:28:41.075001 | orchestrator | 2025-03-23 13:28:41.075015 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-23 13:28:41.075029 | orchestrator | Sunday 23 March 2025 13:26:33 +0000 (0:00:00.450) 0:00:42.919 ********** 2025-03-23 13:28:41.075043 | orchestrator | 2025-03-23 13:28:41.075057 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-23 13:28:41.075071 | orchestrator | Sunday 23 March 2025 13:26:33 +0000 (0:00:00.155) 0:00:43.075 ********** 2025-03-23 13:28:41.075084 | orchestrator | 2025-03-23 13:28:41.075098 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-23 13:28:41.075112 | orchestrator | Sunday 23 March 2025 13:26:33 +0000 (0:00:00.170) 0:00:43.246 ********** 2025-03-23 13:28:41.075126 | orchestrator | 2025-03-23 13:28:41.075140 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-23 13:28:41.075154 | orchestrator | Sunday 23 March 2025 13:26:33 +0000 (0:00:00.125) 0:00:43.372 ********** 2025-03-23 13:28:41.075168 | orchestrator | 2025-03-23 13:28:41.075182 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-03-23 13:28:41.075195 | orchestrator | Sunday 23 March 2025 13:26:34 +0000 (0:00:00.623) 0:00:43.995 ********** 2025-03-23 13:28:41.075209 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:28:41.075237 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:28:41.075251 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:28:41.075265 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:28:41.075279 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:28:41.075293 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:28:41.075307 | orchestrator | 2025-03-23 13:28:41.075321 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-03-23 13:28:41.075335 | orchestrator | Sunday 23 March 2025 13:26:37 +0000 (0:00:02.663) 0:00:46.659 ********** 2025-03-23 13:28:41.075349 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:28:41.075362 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:28:41.075376 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:28:41.075390 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:28:41.075403 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:28:41.075417 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:28:41.075431 | orchestrator | 2025-03-23 13:28:41.075445 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-03-23 13:28:41.075458 | orchestrator | 2025-03-23 13:28:41.075472 | 
orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-03-23 13:28:41.075486 | orchestrator | Sunday 23 March 2025 13:26:58 +0000 (0:00:21.142) 0:01:07.801 ********** 2025-03-23 13:28:41.075500 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:28:41.075557 | orchestrator | 2025-03-23 13:28:41.075574 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-03-23 13:28:41.075589 | orchestrator | Sunday 23 March 2025 13:26:58 +0000 (0:00:00.846) 0:01:08.648 ********** 2025-03-23 13:28:41.075603 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:28:41.075617 | orchestrator | 2025-03-23 13:28:41.075638 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-03-23 13:28:41.075658 | orchestrator | Sunday 23 March 2025 13:27:00 +0000 (0:00:01.378) 0:01:10.026 ********** 2025-03-23 13:28:41.075672 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:28:41.075687 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:28:41.075700 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:28:41.075714 | orchestrator | 2025-03-23 13:28:41.075728 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-03-23 13:28:41.075742 | orchestrator | Sunday 23 March 2025 13:27:01 +0000 (0:00:01.212) 0:01:11.239 ********** 2025-03-23 13:28:41.075756 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:28:41.075769 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:28:41.075783 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:28:41.075797 | orchestrator | 2025-03-23 13:28:41.075811 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-03-23 13:28:41.075825 | orchestrator | Sunday 23 March 2025 13:27:01 +0000 (0:00:00.411) 0:01:11.650 ********** 2025-03-23 13:28:41.075838 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:28:41.075852 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:28:41.075866 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:28:41.075880 | orchestrator | 2025-03-23 13:28:41.075894 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-03-23 13:28:41.075908 | orchestrator | Sunday 23 March 2025 13:27:02 +0000 (0:00:00.563) 0:01:12.214 ********** 2025-03-23 13:28:41.075921 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:28:41.075935 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:28:41.075949 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:28:41.075963 | orchestrator | 2025-03-23 13:28:41.075977 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-03-23 13:28:41.075991 | orchestrator | Sunday 23 March 2025 13:27:03 +0000 (0:00:00.597) 0:01:12.811 ********** 2025-03-23 13:28:41.076004 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:28:41.076018 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:28:41.076032 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:28:41.076053 | orchestrator | 2025-03-23 13:28:41.076067 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-03-23 13:28:41.076081 | orchestrator | Sunday 23 March 2025 13:27:03 +0000 (0:00:00.570) 0:01:13.382 ********** 2025-03-23 13:28:41.076095 | orchestrator | skipping: 
[testbed-node-0] 2025-03-23 13:28:41.076109 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:28:41.076128 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.076180 | orchestrator | 2025-03-23 13:28:41.076196 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-03-23 13:28:41.076210 | orchestrator | Sunday 23 March 2025 13:27:04 +0000 (0:00:00.466) 0:01:13.849 ********** 2025-03-23 13:28:41.076224 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:28:41.076238 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:28:41.076252 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.076265 | orchestrator | 2025-03-23 13:28:41.076279 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-03-23 13:28:41.076293 | orchestrator | Sunday 23 March 2025 13:27:04 +0000 (0:00:00.787) 0:01:14.636 ********** 2025-03-23 13:28:41.076307 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:28:41.076320 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:28:41.076334 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.076348 | orchestrator | 2025-03-23 13:28:41.076362 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-03-23 13:28:41.076375 | orchestrator | Sunday 23 March 2025 13:27:05 +0000 (0:00:00.669) 0:01:15.305 ********** 2025-03-23 13:28:41.076389 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:28:41.076403 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:28:41.076416 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.076430 | orchestrator | 2025-03-23 13:28:41.076444 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-03-23 13:28:41.076458 | orchestrator | Sunday 23 March 2025 13:27:06 +0000 (0:00:00.592) 0:01:15.898 ********** 2025-03-23 13:28:41.076471 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:28:41.076485 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:28:41.076499 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.076512 | orchestrator | 2025-03-23 13:28:41.076545 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-03-23 13:28:41.076559 | orchestrator | Sunday 23 March 2025 13:27:07 +0000 (0:00:01.580) 0:01:17.478 ********** 2025-03-23 13:28:41.076573 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:28:41.076586 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:28:41.076600 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.076613 | orchestrator | 2025-03-23 13:28:41.076628 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-03-23 13:28:41.076641 | orchestrator | Sunday 23 March 2025 13:27:09 +0000 (0:00:01.887) 0:01:19.366 ********** 2025-03-23 13:28:41.076655 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:28:41.076669 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:28:41.076682 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.076696 | orchestrator | 2025-03-23 13:28:41.076710 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-03-23 13:28:41.076723 | orchestrator | Sunday 23 March 2025 13:27:11 +0000 (0:00:01.419) 0:01:20.785 ********** 2025-03-23 13:28:41.076737 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:28:41.076751 | orchestrator | 
skipping: [testbed-node-1] 2025-03-23 13:28:41.076765 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.076778 | orchestrator | 2025-03-23 13:28:41.076792 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-03-23 13:28:41.076806 | orchestrator | Sunday 23 March 2025 13:27:11 +0000 (0:00:00.762) 0:01:21.547 ********** 2025-03-23 13:28:41.076820 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:28:41.076833 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:28:41.076847 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.076861 | orchestrator | 2025-03-23 13:28:41.076874 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-03-23 13:28:41.076895 | orchestrator | Sunday 23 March 2025 13:27:12 +0000 (0:00:00.965) 0:01:22.512 ********** 2025-03-23 13:28:41.076909 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:28:41.076923 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:28:41.076936 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.076950 | orchestrator | 2025-03-23 13:28:41.076970 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-03-23 13:28:41.076985 | orchestrator | Sunday 23 March 2025 13:27:13 +0000 (0:00:00.882) 0:01:23.395 ********** 2025-03-23 13:28:41.076999 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:28:41.077013 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:28:41.077027 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.077041 | orchestrator | 2025-03-23 13:28:41.077055 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-03-23 13:28:41.077073 | orchestrator | Sunday 23 March 2025 13:27:15 +0000 (0:00:01.481) 0:01:24.876 ********** 2025-03-23 13:28:41.077087 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:28:41.077101 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:28:41.077115 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.077129 | orchestrator | 2025-03-23 13:28:41.077143 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-03-23 13:28:41.077157 | orchestrator | Sunday 23 March 2025 13:27:15 +0000 (0:00:00.694) 0:01:25.571 ********** 2025-03-23 13:28:41.077171 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:28:41.077185 | orchestrator | 2025-03-23 13:28:41.077198 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-03-23 13:28:41.077212 | orchestrator | Sunday 23 March 2025 13:27:17 +0000 (0:00:01.148) 0:01:26.720 ********** 2025-03-23 13:28:41.077226 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:28:41.077240 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:28:41.077254 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:28:41.077268 | orchestrator | 2025-03-23 13:28:41.077282 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-03-23 13:28:41.077295 | orchestrator | Sunday 23 March 2025 13:27:18 +0000 (0:00:01.300) 0:01:28.022 ********** 2025-03-23 13:28:41.077309 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:28:41.077323 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:28:41.077336 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:28:41.077350 | orchestrator | 2025-03-23 
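
Editor's note: the "Check OVN NB/SB service port liveness" and leader/follower division tasks above are all skipped because this run is bootstrapping a new cluster, so the role instead includes `bootstrap-initial.yml` and sets the "new cluster" bootstrap args facts. As a rough illustration of what such a liveness probe amounts to, the sketch below simply opens a TCP connection to the conventional OVN northbound and southbound ports (6641 and 6642). This is an illustrative approximation, not the kolla-ansible implementation; the node names and timeout are examples taken from or assumed for this log.

```python
# Hypothetical liveness probe for the OVN NB/SB ovsdb-server ports.
# 6641/6642 are the conventional OVN NB/SB TCP ports; hosts are example names.
import socket

OVN_PORTS = {"nb": 6641, "sb": 6642}
NODES = ["testbed-node-0", "testbed-node-1", "testbed-node-2"]  # from the log above


def port_is_live(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    for node in NODES:
        for name, port in OVN_PORTS.items():
            state = "up" if port_is_live(node, port) else "down"
            print(f"{node} ovn-{name}-db port {port}: {state}")
```

On an existing cluster this kind of probe is what lets the role divide hosts into "live" and "not live" groups before deciding whether to bootstrap or join; here every host is new, so the checks are skipped.
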
13:28:41.077364 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-03-23 13:28:41.077378 | orchestrator | Sunday 23 March 2025 13:27:19 +0000 (0:00:01.573) 0:01:29.596 ********** 2025-03-23 13:28:41.077391 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:28:41.077405 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:28:41.077419 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.077433 | orchestrator | 2025-03-23 13:28:41.077447 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-03-23 13:28:41.077461 | orchestrator | Sunday 23 March 2025 13:27:21 +0000 (0:00:01.158) 0:01:30.754 ********** 2025-03-23 13:28:41.077474 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:28:41.077488 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:28:41.077502 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.077533 | orchestrator | 2025-03-23 13:28:41.077548 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-03-23 13:28:41.077562 | orchestrator | Sunday 23 March 2025 13:27:21 +0000 (0:00:00.586) 0:01:31.340 ********** 2025-03-23 13:28:41.077576 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:28:41.077590 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:28:41.077604 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.077618 | orchestrator | 2025-03-23 13:28:41.077632 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-03-23 13:28:41.077652 | orchestrator | Sunday 23 March 2025 13:27:22 +0000 (0:00:00.416) 0:01:31.757 ********** 2025-03-23 13:28:41.077666 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:28:41.077680 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:28:41.077699 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.077713 | orchestrator | 2025-03-23 13:28:41.077727 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-03-23 13:28:41.077741 | orchestrator | Sunday 23 March 2025 13:27:23 +0000 (0:00:01.130) 0:01:32.887 ********** 2025-03-23 13:28:41.077755 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:28:41.077769 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:28:41.077782 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.077796 | orchestrator | 2025-03-23 13:28:41.077810 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-03-23 13:28:41.077824 | orchestrator | Sunday 23 March 2025 13:27:23 +0000 (0:00:00.650) 0:01:33.538 ********** 2025-03-23 13:28:41.077837 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:28:41.077851 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:28:41.077865 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.077879 | orchestrator | 2025-03-23 13:28:41.077893 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-03-23 13:28:41.077907 | orchestrator | Sunday 23 March 2025 13:27:24 +0000 (0:00:00.558) 0:01:34.097 ********** 2025-03-23 13:28:41.077921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.077937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.077958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.077979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.077994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078102 | orchestrator | 2025-03-23 13:28:41.078116 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-03-23 13:28:41.078130 | orchestrator | Sunday 23 March 2025 13:27:26 +0000 (0:00:02.277) 0:01:36.374 ********** 2025-03-23 13:28:41.078144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 
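
Editor's note: the "Ensuring config directories exist" and "Copying over config.json files for services" tasks here (results continue below) are driven by a service map like the one echoed in each `item=` line: container name, image, and volumes per service. The sketch below mimics that pattern by iterating such a map, creating a per-service config directory, and writing a minimal config.json in the kolla "command + config_files" layout. The command string, file list, and output path are assumptions for illustration; kolla-ansible templates the real files from the role's own templates into /etc/kolla/<service>/.

```python
# Illustrative re-creation of the config-directory / config.json pattern shown
# in the log. Content of config.json is a minimal stand-in, not the real file.
import json
from pathlib import Path

SERVICES = {
    "ovn-nb-db": {
        "container_name": "ovn_nb_db",
        "image": "registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206",
    },
    "ovn-sb-db": {
        "container_name": "ovn_sb_db",
        "image": "registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206",
    },
    "ovn-northd": {
        "container_name": "ovn_northd",
        "image": "registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206",
    },
}

CONFIG_ROOT = Path("/tmp/kolla-example")  # example path; the role writes under /etc/kolla

for name, service in SERVICES.items():
    conf_dir = CONFIG_ROOT / name
    conf_dir.mkdir(parents=True, exist_ok=True)
    # Minimal stand-in for the templated config.json consumed at container start.
    config = {"command": f"run {service['container_name']}", "config_files": []}
    (conf_dir / "config.json").write_text(json.dumps(config, indent=4))
    print(f"wrote {conf_dir / 'config.json'} for image {service['image']}")
```
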
13:28:41.078262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078290 | orchestrator | 2025-03-23 13:28:41.078304 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-03-23 13:28:41.078318 | orchestrator | Sunday 23 March 2025 13:27:32 +0000 (0:00:06.234) 0:01:42.609 ********** 2025-03-23 13:28:41.078332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.078484 | orchestrator | 2025-03-23 13:28:41.078498 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-23 13:28:41.078512 | orchestrator | Sunday 23 March 2025 13:27:35 +0000 (0:00:02.802) 0:01:45.411 ********** 2025-03-23 13:28:41.078784 | orchestrator | 2025-03-23 13:28:41.078808 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-23 13:28:41.078824 | orchestrator | Sunday 23 March 2025 13:27:35 +0000 (0:00:00.064) 0:01:45.475 ********** 2025-03-23 13:28:41.078838 | orchestrator | 2025-03-23 13:28:41.078852 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-23 13:28:41.078866 | orchestrator | Sunday 23 March 2025 13:27:35 +0000 (0:00:00.064) 0:01:45.539 ********** 2025-03-23 13:28:41.078880 | orchestrator | 2025-03-23 13:28:41.078893 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-03-23 13:28:41.078933 | orchestrator | Sunday 23 March 2025 13:27:36 +0000 (0:00:00.234) 0:01:45.774 ********** 2025-03-23 13:28:41.078948 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:28:41.078964 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:28:41.078977 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:28:41.078991 | orchestrator | 2025-03-23 13:28:41.079005 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-03-23 13:28:41.079019 | orchestrator | Sunday 23 March 2025 13:27:39 +0000 (0:00:02.930) 0:01:48.704 ********** 2025-03-23 13:28:41.079032 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:28:41.079046 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:28:41.079060 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:28:41.079073 | orchestrator | 2025-03-23 
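
Editor's note: once the handlers above have (re)started the ovn-nb-db and ovn-sb-db containers, the role determines which member holds the Raft leadership ("Get OVN_Northbound cluster leader" just below) so that the connection settings are only applied on the leader; note how "Configure OVN NB connection settings" later reports `changed` on testbed-node-0 and `skipping` on the other two nodes. A manual way to inspect this is `ovs-appctl cluster/status OVN_Northbound` against the NB database control socket; the sketch below wraps that in Python. The container name comes from the log, while the control-socket path and the use of `docker exec` are assumptions that may differ per image.

```python
# Query the OVN_Northbound Raft role by hand, roughly the information the
# "Get OVN_Northbound cluster leader" step needs. Socket path is an assumption.
import subprocess

CONTAINER = "ovn_nb_db"                 # container name as shown in the log
CTL_SOCKET = "/run/ovn/ovnnb_db.ctl"    # assumed path inside the container


def nb_cluster_status() -> str:
    """Return the raw `cluster/status OVN_Northbound` output from the NB container."""
    return subprocess.run(
        ["docker", "exec", CONTAINER, "ovs-appctl", "-t", CTL_SOCKET,
         "cluster/status", "OVN_Northbound"],
        check=True, capture_output=True, text=True,
    ).stdout


if __name__ == "__main__":
    status = nb_cluster_status()
    print(status)
    # The output includes a "Role: leader" or "Role: follower" line for this member.
    role = next((line for line in status.splitlines() if line.startswith("Role:")),
                "Role: unknown")
    print("this member is:", role)
```
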
13:28:41.079087 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-03-23 13:28:41.079101 | orchestrator | Sunday 23 March 2025 13:27:47 +0000 (0:00:08.304) 0:01:57.009 ********** 2025-03-23 13:28:41.079114 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:28:41.079128 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:28:41.079142 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:28:41.079156 | orchestrator | 2025-03-23 13:28:41.079170 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-03-23 13:28:41.079183 | orchestrator | Sunday 23 March 2025 13:27:54 +0000 (0:00:06.905) 0:02:03.914 ********** 2025-03-23 13:28:41.079197 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:28:41.079210 | orchestrator | 2025-03-23 13:28:41.079224 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-03-23 13:28:41.079238 | orchestrator | Sunday 23 March 2025 13:27:54 +0000 (0:00:00.177) 0:02:04.092 ********** 2025-03-23 13:28:41.079274 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:28:41.079289 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:28:41.079303 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:28:41.079316 | orchestrator | 2025-03-23 13:28:41.079355 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-03-23 13:28:41.079371 | orchestrator | Sunday 23 March 2025 13:27:55 +0000 (0:00:00.930) 0:02:05.022 ********** 2025-03-23 13:28:41.079384 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:28:41.079398 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.079412 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:28:41.079425 | orchestrator | 2025-03-23 13:28:41.079440 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-03-23 13:28:41.079454 | orchestrator | Sunday 23 March 2025 13:27:55 +0000 (0:00:00.602) 0:02:05.625 ********** 2025-03-23 13:28:41.079467 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:28:41.079481 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:28:41.079495 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:28:41.079509 | orchestrator | 2025-03-23 13:28:41.079551 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-03-23 13:28:41.079565 | orchestrator | Sunday 23 March 2025 13:27:56 +0000 (0:00:00.874) 0:02:06.499 ********** 2025-03-23 13:28:41.079579 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:28:41.079593 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.079607 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:28:41.079621 | orchestrator | 2025-03-23 13:28:41.079634 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-03-23 13:28:41.079648 | orchestrator | Sunday 23 March 2025 13:27:57 +0000 (0:00:00.670) 0:02:07.170 ********** 2025-03-23 13:28:41.079662 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:28:41.079676 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:28:41.079690 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:28:41.079704 | orchestrator | 2025-03-23 13:28:41.079718 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-03-23 13:28:41.079731 | orchestrator | Sunday 23 March 2025 13:27:58 +0000 (0:00:01.085) 0:02:08.255 ********** 2025-03-23 13:28:41.079745 | 
orchestrator | ok: [testbed-node-0] 2025-03-23 13:28:41.079759 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:28:41.079772 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:28:41.079786 | orchestrator | 2025-03-23 13:28:41.079800 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-03-23 13:28:41.079814 | orchestrator | Sunday 23 March 2025 13:27:59 +0000 (0:00:00.914) 0:02:09.170 ********** 2025-03-23 13:28:41.079827 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:28:41.079841 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:28:41.079855 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:28:41.079868 | orchestrator | 2025-03-23 13:28:41.079882 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-03-23 13:28:41.079896 | orchestrator | Sunday 23 March 2025 13:28:00 +0000 (0:00:00.491) 0:02:09.661 ********** 2025-03-23 13:28:41.079911 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.079928 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.079942 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.079968 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.079983 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.079997 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080019 | orchestrator 
| ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080034 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080048 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080063 | orchestrator | 2025-03-23 13:28:41.080077 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-03-23 13:28:41.080091 | orchestrator | Sunday 23 March 2025 13:28:01 +0000 (0:00:01.813) 0:02:11.475 ********** 2025-03-23 13:28:41.080105 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080120 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080134 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080161 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080212 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080254 | orchestrator | 2025-03-23 13:28:41.080269 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-03-23 13:28:41.080283 | orchestrator | Sunday 23 March 2025 13:28:06 +0000 (0:00:04.824) 0:02:16.299 ********** 2025-03-23 13:28:41.080297 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080311 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080325 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080346 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080366 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080380 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080399 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080421 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080436 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:28:41.080450 | orchestrator | 2025-03-23 13:28:41.080464 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-23 13:28:41.080478 | orchestrator | Sunday 23 March 2025 13:28:09 +0000 (0:00:03.259) 0:02:19.559 ********** 2025-03-23 13:28:41.080492 | orchestrator | 2025-03-23 13:28:41.080506 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-23 13:28:41.080556 | orchestrator | Sunday 23 March 2025 13:28:10 +0000 (0:00:00.253) 0:02:19.812 ********** 2025-03-23 13:28:41.080571 | orchestrator | 2025-03-23 13:28:41.080585 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-23 13:28:41.080600 | orchestrator | Sunday 23 March 2025 13:28:10 +0000 (0:00:00.120) 0:02:19.932 ********** 2025-03-23 
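
Editor's note: the second configuration pass that follows only restarts ovn-nb-db/ovn-sb-db on testbed-node-1 and testbed-node-2, then repeats the leader lookup and the "Configure OVN NB/SB connection settings" tasks, which again run only on the leader (testbed-node-0). Those tasks set up the TCP listeners that Neutron and ovn-controller connect to; done by hand this is typically `ovn-nbctl set-connection` / `ovn-sbctl set-connection` with a `ptcp:` target. The sketch below shows the idea; the listen address, inactivity-probe value, and the use of `docker exec` are assumptions, not the role's exact invocation.

```python
# Hedged sketch of the "Configure OVN NB/SB connection settings" step: expose a
# ptcp listener on each OVN database server. Values below are illustrative only.
import subprocess

LISTEN_IP = "0.0.0.0"        # assumed listen address
INACTIVITY_PROBE_MS = 60000  # assumed probe interval


def set_connection(container: str, ctl_tool: str, port: int) -> None:
    """Configure a ptcp listener on the given OVN database server."""
    subprocess.run(
        ["docker", "exec", container, ctl_tool,
         f"--inactivity-probe={INACTIVITY_PROBE_MS}",
         "set-connection", f"ptcp:{port}:{LISTEN_IP}"],
        check=True,
    )


if __name__ == "__main__":
    set_connection("ovn_nb_db", "ovn-nbctl", 6641)  # northbound listener
    set_connection("ovn_sb_db", "ovn-sbctl", 6642)  # southbound listener
```
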
13:28:41.080613 | orchestrator | 2025-03-23 13:28:41.080628 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-03-23 13:28:41.080642 | orchestrator | Sunday 23 March 2025 13:28:10 +0000 (0:00:00.078) 0:02:20.010 ********** 2025-03-23 13:28:41.080656 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:28:41.080677 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:28:41.080691 | orchestrator | 2025-03-23 13:28:41.080705 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-03-23 13:28:41.080719 | orchestrator | Sunday 23 March 2025 13:28:17 +0000 (0:00:06.888) 0:02:26.899 ********** 2025-03-23 13:28:41.080733 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:28:41.080747 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:28:41.080761 | orchestrator | 2025-03-23 13:28:41.080775 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-03-23 13:28:41.080789 | orchestrator | Sunday 23 March 2025 13:28:23 +0000 (0:00:06.660) 0:02:33.560 ********** 2025-03-23 13:28:41.080802 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:28:41.080816 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:28:41.080830 | orchestrator | 2025-03-23 13:28:41.080844 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-03-23 13:28:41.080858 | orchestrator | Sunday 23 March 2025 13:28:31 +0000 (0:00:07.175) 0:02:40.735 ********** 2025-03-23 13:28:41.080872 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:28:41.080886 | orchestrator | 2025-03-23 13:28:41.080900 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-03-23 13:28:41.080914 | orchestrator | Sunday 23 March 2025 13:28:31 +0000 (0:00:00.158) 0:02:40.893 ********** 2025-03-23 13:28:41.080928 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:28:41.080942 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:28:41.080956 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:28:41.080969 | orchestrator | 2025-03-23 13:28:41.080983 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-03-23 13:28:41.080997 | orchestrator | Sunday 23 March 2025 13:28:32 +0000 (0:00:00.968) 0:02:41.862 ********** 2025-03-23 13:28:41.081011 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:28:41.081025 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:28:41.081039 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.081053 | orchestrator | 2025-03-23 13:28:41.081067 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-03-23 13:28:41.081081 | orchestrator | Sunday 23 March 2025 13:28:33 +0000 (0:00:00.884) 0:02:42.747 ********** 2025-03-23 13:28:41.081095 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:28:41.081117 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:28:41.081133 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:28:41.081147 | orchestrator | 2025-03-23 13:28:41.081162 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-03-23 13:28:41.081176 | orchestrator | Sunday 23 March 2025 13:28:34 +0000 (0:00:01.385) 0:02:44.132 ********** 2025-03-23 13:28:41.081190 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:28:41.081203 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:28:41.081217 | orchestrator 
| changed: [testbed-node-0] 2025-03-23 13:28:41.081231 | orchestrator | 2025-03-23 13:28:41.081245 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-03-23 13:28:41.081259 | orchestrator | Sunday 23 March 2025 13:28:35 +0000 (0:00:01.077) 0:02:45.210 ********** 2025-03-23 13:28:41.081273 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:28:41.081287 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:28:41.081300 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:28:41.081314 | orchestrator | 2025-03-23 13:28:41.081329 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-03-23 13:28:41.081343 | orchestrator | Sunday 23 March 2025 13:28:36 +0000 (0:00:01.179) 0:02:46.389 ********** 2025-03-23 13:28:41.081357 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:28:41.081371 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:28:41.081385 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:28:41.081398 | orchestrator | 2025-03-23 13:28:41.081412 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:28:41.081427 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-03-23 13:28:41.081448 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-03-23 13:28:41.081469 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-03-23 13:28:44.120876 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:28:44.121012 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:28:44.121032 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:28:44.121047 | orchestrator | 2025-03-23 13:28:44.121062 | orchestrator | 2025-03-23 13:28:44.121077 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:28:44.121093 | orchestrator | Sunday 23 March 2025 13:28:38 +0000 (0:00:01.745) 0:02:48.134 ********** 2025-03-23 13:28:44.121107 | orchestrator | =============================================================================== 2025-03-23 13:28:44.121121 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.38s 2025-03-23 13:28:44.121135 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 21.14s 2025-03-23 13:28:44.121149 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.97s 2025-03-23 13:28:44.121162 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.08s 2025-03-23 13:28:44.121176 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.82s 2025-03-23 13:28:44.121190 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.23s 2025-03-23 13:28:44.121212 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.82s 2025-03-23 13:28:44.121226 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 3.73s 2025-03-23 13:28:44.121240 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.71s 2025-03-23 
13:28:44.121254 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.26s 2025-03-23 13:28:44.121268 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.80s 2025-03-23 13:28:44.121282 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.66s 2025-03-23 13:28:44.121296 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.66s 2025-03-23 13:28:44.121310 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.50s 2025-03-23 13:28:44.121324 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.28s 2025-03-23 13:28:44.121338 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.99s 2025-03-23 13:28:44.121351 | orchestrator | ovn-db : Fail on existing OVN NB cluster with no leader ----------------- 1.89s 2025-03-23 13:28:44.121365 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.81s 2025-03-23 13:28:44.121379 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.75s 2025-03-23 13:28:44.121393 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.65s 2025-03-23 13:28:44.121408 | orchestrator | 2025-03-23 13:28:41 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:28:44.121424 | orchestrator | 2025-03-23 13:28:41 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:28:44.121455 | orchestrator | 2025-03-23 13:28:44 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:28:44.127999 | orchestrator | 2025-03-23 13:28:44 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:28:44.128249 | orchestrator | 2025-03-23 13:28:44 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:28:44.128363 | orchestrator | 2025-03-23 13:28:44 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:28:47.185502 | orchestrator | 2025-03-23 13:28:47 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:28:47.186837 | orchestrator | 2025-03-23 13:28:47 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:28:47.186880 | orchestrator | 2025-03-23 13:28:47 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:28:50.246144 | orchestrator | 2025-03-23 13:28:47 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:28:50.246255 | orchestrator | 2025-03-23 13:28:50 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:28:50.246600 | orchestrator | 2025-03-23 13:28:50 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:28:50.246632 | orchestrator | 2025-03-23 13:28:50 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:28:53.286120 | orchestrator | 2025-03-23 13:28:50 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:28:53.286227 | orchestrator | 2025-03-23 13:28:53 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:28:53.287865 | orchestrator | 2025-03-23 13:28:53 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:28:53.290189 | orchestrator | 2025-03-23 13:28:53 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in 
state STARTED 2025-03-23 13:28:56.334854 | orchestrator | 2025-03-23 13:28:53 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:28:56.334959 | orchestrator | 2025-03-23 13:28:56 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:28:56.335023 | orchestrator | 2025-03-23 13:28:56 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:28:56.336331 | orchestrator | 2025-03-23 13:28:56 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:28:59.388302 | orchestrator | 2025-03-23 13:28:56 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:28:59.388390 | orchestrator | 2025-03-23 13:28:59 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:28:59.390007 | orchestrator | 2025-03-23 13:28:59 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:28:59.391144 | orchestrator | 2025-03-23 13:28:59 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:29:02.447850 | orchestrator | 2025-03-23 13:28:59 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:29:02.447930 | orchestrator | 2025-03-23 13:29:02 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:29:02.448317 | orchestrator | 2025-03-23 13:29:02 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:29:02.449333 | orchestrator | 2025-03-23 13:29:02 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:29:05.498201 | orchestrator | 2025-03-23 13:29:02 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:29:05.498327 | orchestrator | 2025-03-23 13:29:05 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:29:05.500372 | orchestrator | 2025-03-23 13:29:05 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:29:05.501254 | orchestrator | 2025-03-23 13:29:05 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:29:05.501481 | orchestrator | 2025-03-23 13:29:05 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:29:08.546958 | orchestrator | 2025-03-23 13:29:08 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:29:08.547591 | orchestrator | 2025-03-23 13:29:08 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:29:08.549366 | orchestrator | 2025-03-23 13:29:08 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:29:08.549697 | orchestrator | 2025-03-23 13:29:08 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:29:11.616347 | orchestrator | 2025-03-23 13:29:11 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:29:11.621557 | orchestrator | 2025-03-23 13:29:11 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:29:11.623044 | orchestrator | 2025-03-23 13:29:11 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:29:11.623291 | orchestrator | 2025-03-23 13:29:11 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:29:14.670244 | orchestrator | 2025-03-23 13:29:14 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:29:14.671274 | orchestrator | 2025-03-23 13:29:14 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED 2025-03-23 13:29:14.672234 | 
orchestrator | 2025-03-23 13:29:14 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED
2025-03-23 13:29:17.725884 | orchestrator | 2025-03-23 13:29:14 | INFO  | Wait 1 second(s) until the next check
2025-03-23 13:29:17.726014 | orchestrator | 2025-03-23 13:29:17 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
2025-03-23 13:29:17.727898 | orchestrator | 2025-03-23 13:29:17 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state STARTED
2025-03-23 13:29:17.729624 | orchestrator | 2025-03-23 13:29:17 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED
2025-03-23 13:29:17.730735 | orchestrator | 2025-03-23 13:29:17 | INFO  | Wait 1 second(s) until the next check
[... checks of the same three tasks in state STARTED repeat every ~3 seconds ...]
2025-03-23 13:29:23.830642 | orchestrator | 2025-03-23 13:29:23 | INFO  | Task dd711ebf-1804-4229-b017-610b35d9ed01 is in state STARTED
[... checks of all four tasks in state STARTED repeat every ~3 seconds ...]
2025-03-23 13:29:39.166952 | orchestrator | 2025-03-23 13:29:39 | INFO  | Task dd711ebf-1804-4229-b017-610b35d9ed01 is in state SUCCESS
[... checks of the remaining three tasks in state STARTED repeat every ~3 seconds until 13:33:06 ...]
2025-03-23 13:33:10.061914 | orchestrator | 2025-03-23 13:33:10 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
2025-03-23 13:33:10.064839 | orchestrator | 2025-03-23 13:33:10 | INFO  | Task f77e125a-d72a-4daf-89ef-ff84837807c1 is in state SUCCESS
2025-03-23 13:33:10.065075 | orchestrator |
2025-03-23 13:33:10.066833 | orchestrator | None
2025-03-23 13:33:10.066876 | orchestrator |
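The block above is the deployment CLI waiting for the queued tasks to finish: it re-checks each task's state roughly once per second and only prints the captured Ansible output once a task reaches SUCCESS. Below is a minimal sketch of such a wait loop; get_task_state() is a hypothetical stand-in for whatever result backend the real tooling queries and is not taken from this log.

# Minimal sketch of the wait loop reflected in the log above: poll the state of
# each queued task, print it, and sleep one second between rounds until every
# task has finished. get_task_state() is a placeholder, not the real lookup.
import time


def get_task_state(task_id: str) -> str:
    """Placeholder: a real implementation would query the task result backend."""
    return "SUCCESS"


def wait_for_tasks(task_ids, interval: int = 1) -> None:
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)


wait_for_tasks([
    "f8079d8c-9512-4ecd-b2ac-9d3341f82384",
    "f77e125a-d72a-4daf-89ef-ff84837807c1",
    "302dbb0c-0b77-484a-9990-d112d808667b",
])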
2025-03-23 13:33:10.066892 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-23 13:33:10.066907 | orchestrator |
2025-03-23 13:33:10.066921 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-03-23 13:33:10.066935 | orchestrator | Sunday 23 March 2025 13:24:10 +0000 (0:00:00.713) 0:00:00.713 **********
2025-03-23 13:33:10.066949 | orchestrator | ok: [testbed-node-0]
2025-03-23 13:33:10.066965 | orchestrator | ok: [testbed-node-1]
2025-03-23 13:33:10.066979 | orchestrator | ok: [testbed-node-2]
2025-03-23 13:33:10.066993 | orchestrator |
2025-03-23 13:33:10.067007 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-03-23 13:33:10.067021 | orchestrator | Sunday 23 March 2025 13:24:11 +0000 (0:00:00.912) 0:00:01.626 **********
2025-03-23 13:33:10.067036 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-03-23 13:33:10.067050 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-03-23 13:33:10.067064 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-03-23 13:33:10.067078 | orchestrator |
2025-03-23 13:33:10.067092 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-03-23 13:33:10.067106 | orchestrator |
2025-03-23 13:33:10.067120 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-03-23 13:33:10.067134 | orchestrator | Sunday 23 March 2025 13:24:12 +0000 (0:00:00.534) 0:00:02.160 **********
2025-03-23 13:33:10.067147 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-23 13:33:10.067162 | orchestrator |
2025-03-23 13:33:10.067176 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-03-23 13:33:10.067190 | orchestrator | Sunday 23 March 2025 13:24:13 +0000 (0:00:01.231) 0:00:03.392 **********
2025-03-23 13:33:10.067204 | orchestrator | ok: [testbed-node-0]
2025-03-23 13:33:10.067219 | orchestrator | ok: [testbed-node-1]
2025-03-23 13:33:10.067233 | orchestrator | ok: [testbed-node-2]
2025-03-23 13:33:10.067246 | orchestrator |
2025-03-23 13:33:10.067260 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-03-23 13:33:10.067274 | orchestrator | Sunday 23 March 2025 13:24:14 +0000 (0:00:01.280) 0:00:04.673 **********
2025-03-23 13:33:10.067288 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-23 13:33:10.067302 | orchestrator |
2025-03-23 13:33:10.067316 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-03-23 13:33:10.067355 | orchestrator | Sunday 23 March 2025 13:24:15 +0000 (0:00:00.593) 0:00:05.266 **********
2025-03-23 13:33:10.067369 | orchestrator | ok: [testbed-node-0]
2025-03-23 13:33:10.067383 | orchestrator | ok: [testbed-node-1]
2025-03-23 13:33:10.067397 | orchestrator | ok: [testbed-node-2]
2025-03-23 13:33:10.067411 | orchestrator |
2025-03-23 13:33:10.067425 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-03-23 13:33:10.067439 | orchestrator | Sunday 23 March 2025 13:24:16 +0000 (0:00:01.488) 0:00:06.754 **********
2025-03-23 13:33:10.067453 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-03-23 13:33:10.067505 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-03-23 13:33:10.067522 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-03-23 13:33:10.067550 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-03-23 13:33:10.067565 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-03-23 13:33:10.067598 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-03-23 13:33:10.067613 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-03-23 13:33:10.067628 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-03-23 13:33:10.067643 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-03-23 13:33:10.067657 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-03-23 13:33:10.067733 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-03-23 13:33:10.067748 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-03-23 13:33:10.067762 | orchestrator |
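The sysctl task above sets net.ipv6.ip_nonlocal_bind, net.ipv4.ip_nonlocal_bind and net.unix.max_dgram_qlen on every node and leaves net.ipv4.tcp_retries2 alone (KOLLA_UNSET, reported as "ok"). The real role applies these through Ansible's sysctl module; the sketch below only illustrates the effect of those values by writing the corresponding /proc/sys entries, and would need root to change anything.

# Illustration only: apply the sysctl values reported in the task above by
# writing /proc/sys directly. Entries marked KOLLA_UNSET are skipped, which
# mirrors the unchanged "ok" items in the log.
from pathlib import Path

SYSCTL_VALUES = {
    "net.ipv6.ip_nonlocal_bind": 1,
    "net.ipv4.ip_nonlocal_bind": 1,
    "net.ipv4.tcp_retries2": "KOLLA_UNSET",  # not managed by the role
    "net.unix.max_dgram_qlen": 128,
}


def apply_sysctl(values: dict) -> None:
    for name, value in values.items():
        if value == "KOLLA_UNSET":
            continue  # leave this key untouched
        path = Path("/proc/sys") / name.replace(".", "/")
        path.write_text(f"{value}\n")  # requires root privileges


if __name__ == "__main__":
    apply_sysctl(SYSCTL_VALUES)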
2025-03-23 13:33:10.067776 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-03-23 13:33:10.067790 | orchestrator | Sunday 23 March 2025 13:24:20 +0000 (0:00:03.450) 0:00:10.205 **********
2025-03-23 13:33:10.067804 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-03-23 13:33:10.067832 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-03-23 13:33:10.067847 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-03-23 13:33:10.067861 | orchestrator |
2025-03-23 13:33:10.067875 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-03-23 13:33:10.067889 | orchestrator | Sunday 23 March 2025 13:24:21 +0000 (0:00:01.550) 0:00:11.755 **********
2025-03-23 13:33:10.067903 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-03-23 13:33:10.067917 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-03-23 13:33:10.067931 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-03-23 13:33:10.067945 | orchestrator |
2025-03-23 13:33:10.067959 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-03-23 13:33:10.067973 | orchestrator | Sunday 23 March 2025 13:24:24 +0000 (0:00:02.223) 0:00:13.979 **********
2025-03-23 13:33:10.067987 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-03-23 13:33:10.068001 | orchestrator | skipping: [testbed-node-0]
2025-03-23 13:33:10.068025 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-03-23 13:33:10.068040 | orchestrator | skipping: [testbed-node-1]
2025-03-23 13:33:10.068055 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-03-23 13:33:10.068069 | orchestrator | skipping: [testbed-node-2]
2025-03-23 13:33:10.068082 | orchestrator |
2025-03-23 13:33:10.068097 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-03-23 13:33:10.068111 | orchestrator | Sunday 23 March 2025 13:24:25 +0000 (0:00:01.479) 0:00:15.458 **********
2025-03-23 13:33:10.068127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-03-23 13:33:10.068195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-03-23 13:33:10.068212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled':
True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-23 13:33:10.068227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-23 13:33:10.068243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-23 13:33:10.068267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 13:33:10.068283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 13:33:10.068304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 13:33:10.068319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-23 13:33:10.068334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 13:33:10.068348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-23 13:33:10.068363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 13:33:10.068377 | orchestrator | 2025-03-23 13:33:10.068392 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-03-23 13:33:10.068406 | orchestrator | Sunday 23 March 2025 13:24:29 +0000 (0:00:04.381) 0:00:19.839 ********** 2025-03-23 13:33:10.068420 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.068441 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.068455 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.068469 | orchestrator | 2025-03-23 13:33:10.068489 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories 
exist] **** 2025-03-23 13:33:10.068504 | orchestrator | Sunday 23 March 2025 13:24:33 +0000 (0:00:03.209) 0:00:23.049 ********** 2025-03-23 13:33:10.068517 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-03-23 13:33:10.068531 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-03-23 13:33:10.068545 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-03-23 13:33:10.068559 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-03-23 13:33:10.068573 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-03-23 13:33:10.068620 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-03-23 13:33:10.068634 | orchestrator | 2025-03-23 13:33:10.068648 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-03-23 13:33:10.068662 | orchestrator | Sunday 23 March 2025 13:24:39 +0000 (0:00:06.574) 0:00:29.624 ********** 2025-03-23 13:33:10.068676 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.068689 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.068704 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.068717 | orchestrator | 2025-03-23 13:33:10.068731 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-03-23 13:33:10.068746 | orchestrator | Sunday 23 March 2025 13:24:42 +0000 (0:00:02.920) 0:00:32.545 ********** 2025-03-23 13:33:10.068760 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:33:10.068774 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:33:10.068788 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:33:10.068802 | orchestrator | 2025-03-23 13:33:10.068816 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-03-23 13:33:10.068829 | orchestrator | Sunday 23 March 2025 13:24:47 +0000 (0:00:05.259) 0:00:37.804 ********** 2025-03-23 13:33:10.068844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-03-23 13:33:10.068859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-03-23 13:33:10.068874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-23 13:33:10.068896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-03-23 13:33:10.068918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-23 13:33:10.068933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 13:33:10.068948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-23 13:33:10.068962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 13:33:10.068978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 13:33:10.068992 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.069006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 13:33:10.069027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 13:33:10.069042 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.069063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 13:33:10.069078 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.069092 | orchestrator | 2025-03-23 13:33:10.069106 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-03-23 13:33:10.069120 | orchestrator | Sunday 23 March 2025 13:24:51 +0000 (0:00:03.363) 0:00:41.167 ********** 2025-03-23 13:33:10.069134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-23 13:33:10.069149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-23 13:33:10.069164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-23 13:33:10.069184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 13:33:10.069199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 13:33:10.069220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 13:33:10.069235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 13:33:10.069249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 13:33:10.069264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 13:33:10.069279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 13:33:10.069301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 13:33:10.069329 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 13:33:10.069626 | orchestrator | 2025-03-23 13:33:10.069649 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-03-23 13:33:10.069663 | orchestrator | Sunday 23 March 2025 13:24:57 +0000 (0:00:06.112) 0:00:47.279 ********** 2025-03-23 13:33:10.069678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-23 13:33:10.069694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-23 13:33:10.069716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-23 13:33:10.069741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 13:33:10.069756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 13:33:10.069778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 13:33:10.069793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-23 13:33:10.069808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 13:33:10.069828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-23 13:33:10.069843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': 
{'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 13:33:10.069865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-23 13:33:10.069880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 13:33:10.069894 | orchestrator | 2025-03-23 13:33:10.069909 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-03-23 13:33:10.069923 | orchestrator | Sunday 23 March 2025 13:25:01 +0000 (0:00:03.811) 0:00:51.091 ********** 2025-03-23 13:33:10.069943 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-03-23 13:33:10.069959 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-03-23 13:33:10.069973 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-03-23 13:33:10.069987 | orchestrator | 2025-03-23 13:33:10.070001 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-03-23 13:33:10.070064 | orchestrator | Sunday 23 March 2025 13:25:04 +0000 (0:00:03.393) 0:00:54.485 ********** 2025-03-23 13:33:10.070083 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-03-23 13:33:10.070098 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-03-23 13:33:10.070111 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-03-23 13:33:10.070125 | orchestrator | 2025-03-23 13:33:10.070139 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-03-23 
13:33:10.070154 | orchestrator | Sunday 23 March 2025 13:25:12 +0000 (0:00:07.484) 0:01:01.970 **********
2025-03-23 13:33:10.070168 | orchestrator | skipping: [testbed-node-0]
2025-03-23 13:33:10.070183 | orchestrator | skipping: [testbed-node-1]
2025-03-23 13:33:10.070196 | orchestrator | skipping: [testbed-node-2]
2025-03-23 13:33:10.070211 | orchestrator |
2025-03-23 13:33:10.070246 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-03-23 13:33:10.070263 | orchestrator | Sunday 23 March 2025 13:25:13 +0000 (0:00:01.687) 0:01:03.657 **********
2025-03-23 13:33:10.070279 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-03-23 13:33:10.070365 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-03-23 13:33:10.070382 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-03-23 13:33:10.070398 | orchestrator |
2025-03-23 13:33:10.070414 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-03-23 13:33:10.070429 | orchestrator | Sunday 23 March 2025 13:25:18 +0000 (0:00:04.460) 0:01:08.118 **********
2025-03-23 13:33:10.070445 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-03-23 13:33:10.070461 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-03-23 13:33:10.070478 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-03-23 13:33:10.070493 | orchestrator |
2025-03-23 13:33:10.070509 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-03-23 13:33:10.070524 | orchestrator | Sunday 23 March 2025 13:25:23 +0000 (0:00:04.863) 0:01:12.981 **********
2025-03-23 13:33:10.070540 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-03-23 13:33:10.070566 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-03-23 13:33:10.070604 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-03-23 13:33:10.070620 | orchestrator |
2025-03-23 13:33:10.070634 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-03-23 13:33:10.070648 | orchestrator | Sunday 23 March 2025 13:25:26 +0000 (0:00:03.301) 0:01:16.282 **********
2025-03-23 13:33:10.070662 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-03-23 13:33:10.070676 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-03-23 13:33:10.070690 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-03-23 13:33:10.070703 | orchestrator |
2025-03-23 13:33:10.070815 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-03-23 13:33:10.070831 | orchestrator | Sunday 23 March 2025 13:25:29 +0000 (0:00:02.797) 0:01:19.080 **********
2025-03-23 13:33:10.070845 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-23 13:33:10.070859 | orchestrator |
2025-03-23 13:33:10.070873 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying
over extra CA certificates] *** 2025-03-23 13:33:10.070886 | orchestrator | Sunday 23 March 2025 13:25:30 +0000 (0:00:01.240) 0:01:20.320 ********** 2025-03-23 13:33:10.070901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-23 13:33:10.070926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-23 13:33:10.070963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-23 13:33:10.070979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 13:33:10.070994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}}) 2025-03-23 13:33:10.071008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 13:33:10.071023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-23 13:33:10.071043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-23 13:33:10.071063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-23 13:33:10.071086 | orchestrator | 2025-03-23 13:33:10.071100 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-03-23 13:33:10.071114 | orchestrator | Sunday 23 March 2025 13:25:33 +0000 (0:00:03.277) 0:01:23.598 ********** 2025-03-23 13:33:10.071128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-03-23 13:33:10.071143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-23 13:33:10.071157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 13:33:10.071172 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.071186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-03-23 13:33:10.071200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-23 13:33:10.071221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 13:33:10.071243 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.071262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-03-23 13:33:10.071277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-23 13:33:10.071292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 13:33:10.071306 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.071337 | orchestrator | 2025-03-23 13:33:10.071353 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-03-23 13:33:10.071367 | orchestrator | Sunday 23 March 2025 13:25:34 +0000 (0:00:00.908) 0:01:24.506 ********** 2025-03-23 13:33:10.071381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-03-23 13:33:10.071396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-23 13:33:10.071416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 13:33:10.071438 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.071472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-03-23 13:33:10.071489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-23 13:33:10.071503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 13:33:10.071537 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.071553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-03-23 13:33:10.071568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-03-23 13:33:10.071600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-03-23 13:33:10.071622 | orchestrator | skipping: [testbed-node-2]
2025-03-23 13:33:10.071636 | orchestrator |
2025-03-23 13:33:10.071711 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-03-23 13:33:10.071726 | orchestrator | Sunday 23 March 2025 13:25:35 +0000 (0:00:01.343) 0:01:25.850 **********
2025-03-23 13:33:10.071752 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-03-23 13:33:10.071768 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-03-23 13:33:10.071782 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-03-23 13:33:10.071796 | orchestrator |
2025-03-23 13:33:10.071810 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-03-23 13:33:10.071824 | orchestrator | Sunday 23 March 2025 13:25:39 +0000 (0:00:03.515) 0:01:29.365 **********
2025-03-23 13:33:10.071838 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-03-23 13:33:10.071853 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-03-23 13:33:10.071867 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-03-23 13:33:10.071880 | orchestrator |
2025-03-23 13:33:10.071894 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-03-23 13:33:10.071909 | orchestrator | Sunday 23 March 2025 13:25:41 +0000 (0:00:02.484) 0:01:31.850 **********
2025-03-23 13:33:10.071922 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-03-23 13:33:10.071936 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-03-23 13:33:10.071950 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-03-23 13:33:10.071964 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-03-23 13:33:10.071978 | orchestrator | skipping: [testbed-node-0]
2025-03-23 13:33:10.071993 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-03-23 13:33:10.072007 | orchestrator | skipping: [testbed-node-1]
2025-03-23 13:33:10.072021 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-03-23 13:33:10.072035 | orchestrator | skipping:
[testbed-node-2] 2025-03-23 13:33:10.072049 | orchestrator | 2025-03-23 13:33:10.072063 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-03-23 13:33:10.072077 | orchestrator | Sunday 23 March 2025 13:25:43 +0000 (0:00:01.471) 0:01:33.321 ********** 2025-03-23 13:33:10.072096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-23 13:33:10.072111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-23 13:33:10.072134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-23 13:33:10.072156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 13:33:10.073482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 13:33:10.073522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 13:33:10.073541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-23 13:33:10.073555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 13:33:10.073602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-23 13:33:10.073617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-23 13:33:10.073644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 13:33:10.073658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b', '__omit_place_holder__b16e7e38fb98ba78154786e15e86a2a94f2a5f7b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 13:33:10.073671 | orchestrator | 2025-03-23 13:33:10.073683 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-03-23 13:33:10.073696 | orchestrator | Sunday 23 March 2025 13:25:46 +0000 (0:00:03.259) 0:01:36.581 ********** 2025-03-23 13:33:10.073732 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.073745 | orchestrator | 2025-03-23 13:33:10.073759 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-03-23 13:33:10.073772 | orchestrator | Sunday 23 March 2025 13:25:47 +0000 (0:00:01.053) 0:01:37.634 ********** 2025-03-23 13:33:10.073786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-03-23 13:33:10.073808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-23 13:33:10.073823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 
'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.073862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.073877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-03-23 13:33:10.073924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-23 13:33:10.073939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.073959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.073984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-03-23 13:33:10.074003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-23 13:33:10.074124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.074154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.074169 | orchestrator | 2025-03-23 13:33:10.074183 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-03-23 13:33:10.074198 | orchestrator | Sunday 23 March 2025 13:25:54 +0000 (0:00:06.779) 0:01:44.414 ********** 2025-03-23 13:33:10.074212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 
'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-03-23 13:33:10.074233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-23 13:33:10.074247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.074269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.074284 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.074303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 
'listen_port': '8042'}}}})  2025-03-23 13:33:10.074321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-23 13:33:10.074342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.074356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.074370 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.074385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-03-23 13:33:10.074406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-23 13:33:10.074427 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.074441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.074459 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.074477 | orchestrator | 2025-03-23 13:33:10.074490 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-03-23 13:33:10.074502 | orchestrator | Sunday 23 March 2025 13:25:55 +0000 (0:00:00.994) 0:01:45.408 ********** 2025-03-23 13:33:10.074515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-03-23 13:33:10.074528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-03-23 13:33:10.074540 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.074553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-03-23 13:33:10.074565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-03-23 13:33:10.074597 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.074610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-03-23 13:33:10.074623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-03-23 13:33:10.074635 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.074648 | orchestrator | 2025-03-23 13:33:10.074660 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-03-23 13:33:10.074673 | orchestrator | Sunday 23 March 2025 13:25:57 +0000 (0:00:01.950) 0:01:47.359 ********** 2025-03-23 13:33:10.074685 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.074698 | orchestrator | changed: 
[testbed-node-1] 2025-03-23 13:33:10.074710 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.074723 | orchestrator | 2025-03-23 13:33:10.074735 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-03-23 13:33:10.074748 | orchestrator | Sunday 23 March 2025 13:25:59 +0000 (0:00:01.734) 0:01:49.094 ********** 2025-03-23 13:33:10.074760 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.074773 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.074785 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.074797 | orchestrator | 2025-03-23 13:33:10.074810 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-03-23 13:33:10.074822 | orchestrator | Sunday 23 March 2025 13:26:02 +0000 (0:00:03.210) 0:01:52.304 ********** 2025-03-23 13:33:10.074835 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.074847 | orchestrator | 2025-03-23 13:33:10.074860 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-03-23 13:33:10.074872 | orchestrator | Sunday 23 March 2025 13:26:03 +0000 (0:00:00.913) 0:01:53.218 ********** 2025-03-23 13:33:10.074893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.074914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.074928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.074955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.074969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.074988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.075008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 
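Note on the item dictionaries above: each kolla-ansible service entry printed in this task carries container, image and volume settings plus an optional 'haproxy' mapping, and in this run only the entries that define such a mapping (barbican-api here, aodh-api earlier) are reported as "changed" by the haproxy-config tasks, while the listener/worker style services are skipped. The following is a minimal, illustrative Python sketch of that selection pattern; the dictionary values are copied from the log output, but select_haproxy_services() is a hypothetical helper that only mimics the observed behaviour and is not the role's actual implementation (kolla-ansible does this with Ansible loops and Jinja templates).

    # Illustrative sketch only: service map values are taken from the log above;
    # select_haproxy_services() is a hypothetical helper mirroring the observed
    # behaviour (entries without a 'haproxy' mapping are skipped by haproxy-config).
    barbican_services = {
        "barbican-api": {
            "container_name": "barbican_api",
            "group": "barbican-api",
            "enabled": True,
            "image": "registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206",
            "haproxy": {
                "barbican_api": {
                    "enabled": "yes", "mode": "http", "external": False,
                    "port": "9311", "listen_port": "9311", "tls_backend": "no",
                },
                "barbican_api_external": {
                    "enabled": "yes", "mode": "http", "external": True,
                    "external_fqdn": "api.testbed.osism.xyz",
                    "port": "9311", "listen_port": "9311", "tls_backend": "no",
                },
            },
        },
        "barbican-worker": {
            "container_name": "barbican_worker",
            "group": "barbican-worker",
            "enabled": True,
            "image": "registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206",
            # no 'haproxy' key -> reported as "skipping" in the task output above
        },
    }

    def select_haproxy_services(services):
        """Return only enabled service entries that declare a 'haproxy' mapping."""
        return {
            name: svc
            for name, svc in services.items()
            if svc.get("enabled") and "haproxy" in svc
        }

    if __name__ == "__main__":
        for name in select_haproxy_services(barbican_services):
            # prints "barbican-api" only, matching the changed/skipping split above
            print(f"would template haproxy config for {name}")
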
2025-03-23 13:33:10.075022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.075035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.075048 | orchestrator | 2025-03-23 13:33:10.075060 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-03-23 13:33:10.075073 | orchestrator | Sunday 23 March 2025 13:26:11 +0000 (0:00:07.671) 0:02:00.890 ********** 2025-03-23 13:33:10.075093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.075113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.075133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.075146 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.075159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.075180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.075193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.075206 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.075225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.075244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.075258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.075270 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.075283 | orchestrator | 2025-03-23 13:33:10.075295 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-03-23 13:33:10.075308 | orchestrator | Sunday 23 March 2025 13:26:12 +0000 (0:00:01.827) 0:02:02.718 ********** 2025-03-23 13:33:10.075320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-03-23 13:33:10.075332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-03-23 13:33:10.075347 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.075360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-03-23 13:33:10.075372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-03-23 13:33:10.075389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-03-23 13:33:10.075402 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.075415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-03-23 13:33:10.075427 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.075440 | orchestrator | 2025-03-23 13:33:10.075452 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-03-23 13:33:10.075464 | orchestrator | Sunday 23 March 2025 13:26:14 +0000 (0:00:01.475) 0:02:04.193 ********** 2025-03-23 13:33:10.075481 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.075494 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.075506 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.075518 | orchestrator | 2025-03-23 13:33:10.075531 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-03-23 13:33:10.075543 | orchestrator | Sunday 23 March 2025 13:26:15 +0000 (0:00:01.520) 0:02:05.714 ********** 2025-03-23 13:33:10.075555 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.075567 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.075597 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.075610 | orchestrator | 2025-03-23 13:33:10.075622 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-03-23 13:33:10.075634 | orchestrator | Sunday 23 March 2025 13:26:17 +0000 (0:00:02.125) 0:02:07.839 ********** 2025-03-23 13:33:10.075647 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.075659 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.075672 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.075684 | orchestrator | 2025-03-23 13:33:10.075702 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-03-23 13:33:10.075716 | orchestrator | Sunday 23 March 2025 13:26:18 +0000 (0:00:00.285) 0:02:08.124 ********** 2025-03-23 13:33:10.075728 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.075740 | orchestrator | 2025-03-23 13:33:10.075753 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-03-23 13:33:10.075765 | orchestrator | Sunday 23 March 2025 13:26:19 +0000 (0:00:00.805) 0:02:08.930 ********** 2025-03-23 13:33:10.075778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 
192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-03-23 13:33:10.075802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-03-23 13:33:10.075816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-03-23 13:33:10.075836 | orchestrator | 2025-03-23 13:33:10.075848 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-03-23 13:33:10.075861 | orchestrator | Sunday 23 March 2025 13:26:22 +0000 (0:00:03.290) 0:02:12.220 ********** 2025-03-23 13:33:10.075873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-03-23 13:33:10.075886 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.075912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-03-23 13:33:10.075926 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.075939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-03-23 13:33:10.075952 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.075965 | orchestrator | 2025-03-23 13:33:10.075977 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-03-23 13:33:10.075989 | orchestrator | Sunday 23 March 2025 13:26:24 +0000 (0:00:02.361) 0:02:14.581 ********** 2025-03-23 13:33:10.076002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-23 13:33:10.076016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-23 13:33:10.076036 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.076049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-23 13:33:10.076061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-23 
13:33:10.076074 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.076086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-23 13:33:10.076108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-23 13:33:10.076122 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.076135 | orchestrator | 2025-03-23 13:33:10.076147 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-03-23 13:33:10.076160 | orchestrator | Sunday 23 March 2025 13:26:27 +0000 (0:00:03.045) 0:02:17.627 ********** 2025-03-23 13:33:10.076172 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.076184 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.076197 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.076209 | orchestrator | 2025-03-23 13:33:10.076221 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-03-23 13:33:10.076234 | orchestrator | Sunday 23 March 2025 13:26:28 +0000 (0:00:00.871) 0:02:18.499 ********** 2025-03-23 13:33:10.076245 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.076258 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.076270 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.076282 | orchestrator | 2025-03-23 13:33:10.076294 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-03-23 13:33:10.076307 | orchestrator | Sunday 23 March 2025 13:26:30 +0000 (0:00:01.973) 0:02:20.473 ********** 2025-03-23 13:33:10.076319 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.076331 | orchestrator | 2025-03-23 13:33:10.076343 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-03-23 13:33:10.076355 | orchestrator | Sunday 23 March 2025 13:26:31 +0000 (0:00:00.917) 0:02:21.390 ********** 2025-03-23 13:33:10.076367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.076386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.076399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.076428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.076442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.076455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.076479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.076500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.076518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.076532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.076545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.076572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.076640 | orchestrator | 2025-03-23 13:33:10.076654 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-03-23 13:33:10.076671 | orchestrator | Sunday 23 March 2025 13:26:38 +0000 (0:00:06.633) 0:02:28.024 ********** 2025-03-23 13:33:10.076684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.076697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.076717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.076731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.076750 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.076763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.076784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.076798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.076817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.077100 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.077118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.077137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.077157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.077169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.077180 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.077190 | orchestrator | 2025-03-23 13:33:10.077200 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-03-23 13:33:10.077211 | orchestrator | Sunday 23 March 2025 13:26:39 +0000 (0:00:01.849) 0:02:29.873 ********** 2025-03-23 13:33:10.077221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-23 13:33:10.077236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-23 13:33:10.077247 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.077258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-23 13:33:10.077273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-23 13:33:10.077283 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.077294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-23 13:33:10.077303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-23 13:33:10.077314 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.077324 | orchestrator | 2025-03-23 13:33:10.077334 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-03-23 13:33:10.077344 | orchestrator | Sunday 23 March 2025 13:26:42 
+0000 (0:00:02.392) 0:02:32.265 ********** 2025-03-23 13:33:10.077354 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.077364 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.077374 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.077384 | orchestrator | 2025-03-23 13:33:10.077394 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-03-23 13:33:10.077404 | orchestrator | Sunday 23 March 2025 13:26:44 +0000 (0:00:02.119) 0:02:34.385 ********** 2025-03-23 13:33:10.077414 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.077424 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.077434 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.077444 | orchestrator | 2025-03-23 13:33:10.077454 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-03-23 13:33:10.077464 | orchestrator | Sunday 23 March 2025 13:26:47 +0000 (0:00:02.581) 0:02:36.967 ********** 2025-03-23 13:33:10.077474 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.077484 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.077494 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.077508 | orchestrator | 2025-03-23 13:33:10.077518 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-03-23 13:33:10.077528 | orchestrator | Sunday 23 March 2025 13:26:47 +0000 (0:00:00.298) 0:02:37.266 ********** 2025-03-23 13:33:10.077539 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.077549 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.077559 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.077569 | orchestrator | 2025-03-23 13:33:10.077595 | orchestrator | TASK [include_role : designate] ************************************************ 2025-03-23 13:33:10.077606 | orchestrator | Sunday 23 March 2025 13:26:47 +0000 (0:00:00.439) 0:02:37.705 ********** 2025-03-23 13:33:10.077616 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.077626 | orchestrator | 2025-03-23 13:33:10.077637 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-03-23 13:33:10.077647 | orchestrator | Sunday 23 March 2025 13:26:49 +0000 (0:00:01.196) 0:02:38.902 ********** 2025-03-23 13:33:10.077657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-23 13:33:10.077678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-23 13:33:10.077697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.077708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.077720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.077732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.077789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.077824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-23 13:33:10.077867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-23 13:33:10.077880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.077892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.077903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.077915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.077935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.077959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-23 13:33:10.077972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-23 13:33:10.077983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.077996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.078008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.078060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.078081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.078094 | orchestrator | 2025-03-23 13:33:10.078109 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-03-23 13:33:10.078120 | orchestrator | Sunday 23 March 2025 13:26:54 +0000 (0:00:05.748) 0:02:44.650 ********** 2025-03-23 13:33:10.078152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-23 13:33:10.078164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-23 13:33:10.078175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.078185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.078202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.078223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.078235 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.078246 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.078256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-23 13:33:10.078267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-23 13:33:10.078283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.078299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-23 
13:33:10.078310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.078325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.078335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.078346 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.078362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-23 13:33:10.078374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
named 53'], 'timeout': '30'}}})  2025-03-23 13:33:10.078390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.078401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.078422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.078434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.078444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.078455 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.078465 | orchestrator | 2025-03-23 13:33:10.078475 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-03-23 13:33:10.078486 | orchestrator | Sunday 23 March 2025 13:26:56 +0000 (0:00:01.449) 
0:02:46.099 ********** 2025-03-23 13:33:10.078496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-03-23 13:33:10.078511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-03-23 13:33:10.078523 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.078533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-03-23 13:33:10.078543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-03-23 13:33:10.078553 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.078563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-03-23 13:33:10.078574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-03-23 13:33:10.078599 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.078610 | orchestrator | 2025-03-23 13:33:10.078620 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-03-23 13:33:10.078630 | orchestrator | Sunday 23 March 2025 13:26:58 +0000 (0:00:02.072) 0:02:48.171 ********** 2025-03-23 13:33:10.078640 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.078650 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.078660 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.078670 | orchestrator | 2025-03-23 13:33:10.078680 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-03-23 13:33:10.078690 | orchestrator | Sunday 23 March 2025 13:27:00 +0000 (0:00:02.048) 0:02:50.219 ********** 2025-03-23 13:33:10.078700 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.078710 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.078720 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.078731 | orchestrator | 2025-03-23 13:33:10.078741 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-03-23 13:33:10.078751 | orchestrator | Sunday 23 March 2025 13:27:03 +0000 (0:00:02.767) 0:02:52.987 ********** 2025-03-23 13:33:10.078761 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.078771 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.078781 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.078791 | orchestrator | 2025-03-23 13:33:10.078802 | orchestrator | TASK [include_role : glance] *************************************************** 2025-03-23 13:33:10.078816 | orchestrator | Sunday 23 March 2025 13:27:03 +0000 (0:00:00.803) 0:02:53.790 ********** 2025-03-23 13:33:10.078826 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 
13:33:10.078836 | orchestrator | 2025-03-23 13:33:10.078846 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-03-23 13:33:10.078856 | orchestrator | Sunday 23 March 2025 13:27:05 +0000 (0:00:01.776) 0:02:55.567 ********** 2025-03-23 13:33:10.078874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-23 13:33:10.078892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-23 13:33:10.078916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-23 13:33:10.078944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-23 13:33:10.078979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-23 13:33:10.079016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-23 13:33:10.079028 | orchestrator | 2025-03-23 13:33:10.079038 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-03-23 13:33:10.079090 | orchestrator | Sunday 23 March 2025 13:27:17 +0000 (0:00:11.644) 0:03:07.211 ********** 2025-03-23 13:33:10.079107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-23 13:33:10.079131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-23 13:33:10.079143 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.079162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-23 13:33:10.079200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-23 13:33:10.079217 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.079228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-23 13:33:10.079252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-23 13:33:10.079275 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.079286 | orchestrator | 2025-03-23 13:33:10.079296 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-03-23 13:33:10.079310 | orchestrator | Sunday 23 March 2025 13:27:24 +0000 (0:00:06.893) 0:03:14.104 ********** 2025-03-23 13:33:10.079321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-23 13:33:10.079333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-23 13:33:10.079343 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.079354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-23 13:33:10.079369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-23 13:33:10.079380 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.079391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-23 13:33:10.079407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-23 13:33:10.079418 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.079428 | orchestrator | 2025-03-23 13:33:10.079438 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-03-23 13:33:10.079448 | orchestrator | Sunday 23 March 2025 13:27:31 +0000 (0:00:07.498) 0:03:21.603 ********** 2025-03-23 13:33:10.079459 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.079469 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.079479 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.079489 | orchestrator | 2025-03-23 13:33:10.079500 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-03-23 13:33:10.079510 | orchestrator | Sunday 23 March 2025 13:27:33 +0000 (0:00:01.670) 0:03:23.273 ********** 2025-03-23 13:33:10.079520 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.079530 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.079541 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.079551 | orchestrator | 2025-03-23 13:33:10.079561 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-03-23 13:33:10.079572 | orchestrator | Sunday 23 March 2025 13:27:36 +0000 (0:00:02.637) 0:03:25.911 ********** 2025-03-23 13:33:10.079629 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.079640 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.079650 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.079660 | orchestrator | 2025-03-23 13:33:10.079671 | orchestrator | TASK 
[include_role : grafana] ************************************************** 2025-03-23 13:33:10.079681 | orchestrator | Sunday 23 March 2025 13:27:36 +0000 (0:00:00.778) 0:03:26.689 ********** 2025-03-23 13:33:10.079691 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.079701 | orchestrator | 2025-03-23 13:33:10.079711 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-03-23 13:33:10.079719 | orchestrator | Sunday 23 March 2025 13:27:38 +0000 (0:00:01.461) 0:03:28.151 ********** 2025-03-23 13:33:10.079728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-23 13:33:10.079738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-23 13:33:10.079756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-23 13:33:10.079766 | orchestrator | 2025-03-23 13:33:10.079774 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-03-23 13:33:10.079783 | orchestrator | Sunday 23 March 2025 13:27:43 +0000 (0:00:04.959) 0:03:33.111 ********** 2025-03-23 13:33:10.079855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-23 13:33:10.079865 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.079874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-23 13:33:10.079883 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.079892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-23 13:33:10.079901 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.079910 | orchestrator | 2025-03-23 13:33:10.079918 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-03-23 13:33:10.079927 | orchestrator | Sunday 23 March 2025 13:27:43 +0000 (0:00:00.432) 0:03:33.543 ********** 2025-03-23 13:33:10.079936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-03-23 13:33:10.079952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-03-23 13:33:10.079961 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.079970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-03-23 13:33:10.079979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-03-23 13:33:10.079988 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.079996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-03-23 13:33:10.080009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-03-23 13:33:10.080018 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.080027 | orchestrator | 2025-03-23 13:33:10.080036 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-03-23 13:33:10.080044 | orchestrator | Sunday 23 March 2025 13:27:44 +0000 (0:00:00.986) 0:03:34.530 ********** 2025-03-23 13:33:10.080053 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.080061 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.080070 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.080078 | orchestrator | 2025-03-23 13:33:10.080087 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-03-23 13:33:10.080095 | orchestrator | Sunday 23 March 2025 13:27:45 +0000 (0:00:01.248) 0:03:35.779 ********** 2025-03-23 13:33:10.080104 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.080113 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.080121 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.080130 | orchestrator | 2025-03-23 13:33:10.080138 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-03-23 13:33:10.080147 | orchestrator | Sunday 23 March 2025 13:27:47 +0000 (0:00:02.054) 0:03:37.834 ********** 2025-03-23 13:33:10.080155 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.080164 | orchestrator | 2025-03-23 13:33:10.080172 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-03-23 13:33:10.080181 | orchestrator | Sunday 23 March 2025 13:27:49 +0000 (0:00:01.080) 0:03:38.914 ********** 2025-03-23 13:33:10.080190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.080199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 
'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.080212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.080225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.080235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.080244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.080253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.080268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.080277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.080286 | orchestrator | 2025-03-23 13:33:10.080299 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-03-23 13:33:10.080308 | orchestrator | Sunday 23 March 2025 13:27:55 +0000 (0:00:06.368) 0:03:45.283 ********** 2025-03-23 13:33:10.080317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.080326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': 
{'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.080339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.080348 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.080357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.080370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.080380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.080389 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.080397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.080410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.080419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.080428 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.080437 | orchestrator | 2025-03-23 13:33:10.080445 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-03-23 13:33:10.080454 | orchestrator | Sunday 23 March 2025 13:27:56 +0000 (0:00:00.910) 0:03:46.194 ********** 2025-03-23 13:33:10.080463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  
2025-03-23 13:33:10.080472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-23 13:33:10.080481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-23 13:33:10.080493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-23 13:33:10.080503 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.080511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-23 13:33:10.080520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-23 13:33:10.080533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-23 13:33:10.080541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-23 13:33:10.080550 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.080559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-23 13:33:10.080572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-23 13:33:10.080594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-23 13:33:10.080604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-23 13:33:10.080613 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.080625 | orchestrator | 2025-03-23 13:33:10.080634 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-03-23 13:33:10.080643 | orchestrator | Sunday 23 March 2025 13:27:57 +0000 (0:00:01.117) 0:03:47.312 ********** 2025-03-23 13:33:10.080651 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.080660 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.080668 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.080677 | orchestrator | 
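[Editor's sketch, not produced by this job: a minimal Python illustration of how a haproxy-config service entry like the 'glance_api' / 'heat_api' items logged above (port, frontend_http_extra, backend_http_extra, custom_member_list) could be rendered into an HAProxy frontend/backend pair. The bind address 192.168.16.9 is taken from the no_proxy list in the glance-api items; this is not the kolla-ansible template itself, only an aid for reading the logged dicts.]

    # Illustrative only; values copied from the glance_api item printed earlier in this log.
    glance_api = {
        "mode": "http",
        "port": "9292",
        "frontend_http_extra": ["timeout client 6h"],
        "backend_http_extra": ["timeout server 6h"],
        "custom_member_list": [
            "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
            "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
            "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
        ],
    }

    def render(name, svc, vip="192.168.16.9"):
        # Build a frontend stanza (bind + extra options) and a backend stanza (members + extras).
        out = [f"frontend {name}_front",
               f"    mode {svc['mode']}",
               f"    bind {vip}:{svc['port']}"]
        out += [f"    {opt}" for opt in svc.get("frontend_http_extra", [])]
        out += [f"    default_backend {name}_back",
                "",
                f"backend {name}_back",
                f"    mode {svc['mode']}"]
        out += [f"    {opt}" for opt in svc.get("backend_http_extra", [])]
        out += [f"    {member}" for member in svc.get("custom_member_list", []) if member]
        return "\n".join(out)

    print(render("glance_api", glance_api))
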
2025-03-23 13:33:10.080686 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-03-23 13:33:10.080694 | orchestrator | Sunday 23 March 2025 13:27:58 +0000 (0:00:01.555) 0:03:48.867 ********** 2025-03-23 13:33:10.080703 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.080711 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.080720 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.080728 | orchestrator | 2025-03-23 13:33:10.080740 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-03-23 13:33:10.080749 | orchestrator | Sunday 23 March 2025 13:28:01 +0000 (0:00:02.541) 0:03:51.409 ********** 2025-03-23 13:33:10.080758 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.080766 | orchestrator | 2025-03-23 13:33:10.080775 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-03-23 13:33:10.080783 | orchestrator | Sunday 23 March 2025 13:28:02 +0000 (0:00:01.203) 0:03:52.612 ********** 2025-03-23 13:33:10.080798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-23 13:33:10.080813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-23 13:33:10.080829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-23 13:33:10.081024 | orchestrator | 2025-03-23 13:33:10.081037 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-03-23 13:33:10.081046 | orchestrator | Sunday 23 March 2025 13:28:08 +0000 (0:00:05.347) 0:03:57.960 ********** 2025-03-23 13:33:10.081055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-23 13:33:10.081065 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.081079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-23 13:33:10.081095 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.081104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-23 13:33:10.081113 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.081122 | orchestrator | 2025-03-23 13:33:10.081134 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-03-23 13:33:10.081147 | orchestrator | Sunday 23 March 2025 13:28:08 +0000 (0:00:00.897) 0:03:58.858 ********** 2025-03-23 13:33:10.081157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-23 13:33:10.081167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-23 13:33:10.081177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-23 13:33:10.081187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-23 13:33:10.081196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-03-23 13:33:10.081205 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.081218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-23 13:33:10.081228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-23 13:33:10.081237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-23 13:33:10.081246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-23 13:33:10.081254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-03-23 13:33:10.081263 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.081272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-23 13:33:10.081281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-23 13:33:10.081298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-23 13:33:10.081307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-23 13:33:10.081316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-03-23 13:33:10.081324 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.081333 | orchestrator | 2025-03-23 13:33:10.081341 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-03-23 13:33:10.081350 | orchestrator | Sunday 23 March 2025 13:28:10 +0000 (0:00:01.680) 0:04:00.538 ********** 2025-03-23 13:33:10.081358 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.081367 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.081375 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.081384 | orchestrator | 2025-03-23 13:33:10.081392 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-03-23 13:33:10.081401 | orchestrator | Sunday 23 March 2025 13:28:12 +0000 (0:00:01.669) 0:04:02.208 ********** 2025-03-23 13:33:10.081409 | orchestrator | changed: [testbed-node-0] 
2025-03-23 13:33:10.081418 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.081426 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.081435 | orchestrator | 2025-03-23 13:33:10.081443 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-03-23 13:33:10.081451 | orchestrator | Sunday 23 March 2025 13:28:15 +0000 (0:00:02.715) 0:04:04.923 ********** 2025-03-23 13:33:10.081460 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.081469 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.081477 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.081485 | orchestrator | 2025-03-23 13:33:10.081494 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-03-23 13:33:10.081503 | orchestrator | Sunday 23 March 2025 13:28:15 +0000 (0:00:00.537) 0:04:05.461 ********** 2025-03-23 13:33:10.081511 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.081520 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.081528 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.081537 | orchestrator | 2025-03-23 13:33:10.081545 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-03-23 13:33:10.081554 | orchestrator | Sunday 23 March 2025 13:28:15 +0000 (0:00:00.342) 0:04:05.804 ********** 2025-03-23 13:33:10.081562 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.081571 | orchestrator | 2025-03-23 13:33:10.081595 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-03-23 13:33:10.081604 | orchestrator | Sunday 23 March 2025 13:28:17 +0000 (0:00:01.411) 0:04:07.215 ********** 2025-03-23 13:33:10.081613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-23 13:33:10.081627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
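[Editor's sketch, not from the playbook: the healthcheck dicts logged above, e.g. {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}, roughly correspond to Docker's --health-* options. The conversion below assumes the numeric values are seconds; how kolla_docker actually consumes them is not shown in this log.]

    # Assumed mapping of a logged healthcheck dict onto docker run style flags.
    healthcheck = {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_listen sshd 8023"],
        "timeout": "30",
    }

    flags = [
        f"--health-cmd='{healthcheck['test'][1]}'",
        f"--health-interval={healthcheck['interval']}s",
        f"--health-retries={healthcheck['retries']}",
        f"--health-start-period={healthcheck['start_period']}s",
        f"--health-timeout={healthcheck['timeout']}s",
    ]
    print(" ".join(flags))
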
2025-03-23 13:33:10.081718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-23 13:33:10.081733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-23 13:33:10.081744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 13:33:10.081755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-23 13:33:10.081770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-23 13:33:10.081785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 13:33:10.082072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-23 13:33:10.082088 | orchestrator | 2025-03-23 13:33:10.082097 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-03-23 13:33:10.082106 | orchestrator | Sunday 23 March 2025 13:28:22 +0000 (0:00:05.496) 0:04:12.712 ********** 2025-03-23 13:33:10.082115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-23 13:33:10.082125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 13:33:10.082141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-23 13:33:10.082150 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.082163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-23 13:33:10.082173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 13:33:10.082182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-23 13:33:10.082191 | orchestrator | 
skipping: [testbed-node-1] 2025-03-23 13:33:10.082201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-23 13:33:10.082218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 13:33:10.082227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-23 13:33:10.082236 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.082245 | orchestrator | 2025-03-23 13:33:10.082254 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-03-23 13:33:10.082263 | orchestrator | Sunday 23 March 2025 13:28:23 +0000 (0:00:01.057) 0:04:13.769 ********** 2025-03-23 13:33:10.082275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-23 13:33:10.082288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-23 13:33:10.082298 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.082307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-23 13:33:10.082316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-23 13:33:10.082326 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.083270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-23 13:33:10.083286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-23 13:33:10.083304 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.083313 | orchestrator | 2025-03-23 13:33:10.083322 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-03-23 13:33:10.083331 | orchestrator | Sunday 23 March 2025 13:28:25 +0000 (0:00:01.720) 0:04:15.489 ********** 2025-03-23 13:33:10.083339 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.083348 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.083357 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.083366 | orchestrator | 2025-03-23 13:33:10.083374 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-03-23 13:33:10.083383 | orchestrator | Sunday 23 March 2025 13:28:27 +0000 (0:00:01.664) 0:04:17.154 ********** 2025-03-23 13:33:10.083392 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.083401 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.083409 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.083418 | orchestrator | 2025-03-23 13:33:10.083427 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-03-23 13:33:10.083436 | orchestrator | Sunday 23 March 2025 13:28:29 +0000 (0:00:02.527) 0:04:19.682 ********** 2025-03-23 13:33:10.083444 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.083453 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.083462 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.083470 | orchestrator | 2025-03-23 13:33:10.083484 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-03-23 13:33:10.083493 | orchestrator | Sunday 23 March 2025 13:28:30 +0000 (0:00:00.340) 0:04:20.023 ********** 2025-03-23 13:33:10.083502 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.083510 | orchestrator | 2025-03-23 13:33:10.083519 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-03-23 13:33:10.083528 | orchestrator | Sunday 23 March 2025 13:28:32 +0000 (0:00:01.971) 0:04:21.994 ********** 2025-03-23 13:33:10.083538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-23 13:33:10.083572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.083638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-23 13:33:10.083658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.083667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-23 13:33:10.083676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.083685 | orchestrator | 2025-03-23 13:33:10.083854 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-03-23 13:33:10.083866 | orchestrator | Sunday 23 March 2025 13:28:38 +0000 (0:00:06.673) 0:04:28.668 ********** 2025-03-23 13:33:10.083898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-23 13:33:10.084184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}})  2025-03-23 13:33:10.084196 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.084205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-23 13:33:10.084216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.084225 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.084283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-23 13:33:10.084296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.084311 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.084395 | orchestrator | 2025-03-23 13:33:10.084407 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-03-23 13:33:10.084416 | orchestrator | Sunday 23 March 2025 13:28:40 +0000 (0:00:01.781) 0:04:30.449 ********** 2025-03-23 13:33:10.084426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-03-23 13:33:10.084436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-03-23 13:33:10.084445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-03-23 13:33:10.084494 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.084504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-03-23 13:33:10.084513 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.084525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-03-23 13:33:10.084535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-03-23 13:33:10.084544 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.084553 | orchestrator | 2025-03-23 13:33:10.084562 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-03-23 13:33:10.085013 | orchestrator | Sunday 23 March 2025 13:28:42 +0000 (0:00:01.476) 0:04:31.926 ********** 2025-03-23 13:33:10.085033 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.085042 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.085051 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.085059 | orchestrator | 2025-03-23 13:33:10.085068 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-03-23 13:33:10.085076 | orchestrator | Sunday 23 March 2025 13:28:43 +0000 (0:00:01.590) 0:04:33.517 ********** 2025-03-23 13:33:10.085085 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.085093 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.085102 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.085111 | orchestrator | 2025-03-23 13:33:10.085119 | orchestrator | TASK [include_role : manila] *************************************************** 2025-03-23 13:33:10.085128 | orchestrator | Sunday 23 March 2025 13:28:46 +0000 (0:00:02.493) 0:04:36.010 ********** 2025-03-23 13:33:10.085137 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.085145 | orchestrator | 2025-03-23 13:33:10.085154 | orchestrator | TASK [haproxy-config : Copying over manila haproxy 
config] ********************* 2025-03-23 13:33:10.085162 | orchestrator | Sunday 23 March 2025 13:28:47 +0000 (0:00:01.298) 0:04:37.309 ********** 2025-03-23 13:33:10.085224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-03-23 13:33:10.085304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.085315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.085324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-03-23 13:33:10.085332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.085349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.085362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.085416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.085429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-03-23 13:33:10.085438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.085447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.085462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.085843 | orchestrator | 2025-03-23 13:33:10.085855 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-03-23 13:33:10.085863 | orchestrator | Sunday 23 March 2025 13:28:52 +0000 (0:00:05.163) 0:04:42.472 ********** 2025-03-23 13:33:10.085880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-03-23 13:33:10.085890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.085898 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.085907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.085915 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.085938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-03-23 13:33:10.085952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.085965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.085974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.085983 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.085992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-03-23 13:33:10.086006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.086129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.086209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.086221 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.086230 | orchestrator | 2025-03-23 13:33:10.086283 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-03-23 13:33:10.086294 | orchestrator | Sunday 23 March 2025 13:28:53 +0000 (0:00:00.754) 0:04:43.226 ********** 2025-03-23 13:33:10.086302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-03-23 13:33:10.086356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-03-23 13:33:10.086368 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.086716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-03-23 13:33:10.086736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-03-23 13:33:10.086745 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.086754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-03-23 13:33:10.086762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-03-23 13:33:10.086771 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.086779 | orchestrator | 2025-03-23 13:33:10.086788 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-03-23 13:33:10.086796 | orchestrator | Sunday 23 March 2025 13:28:54 +0000 (0:00:01.156) 0:04:44.383 ********** 2025-03-23 13:33:10.086805 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.086813 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.086822 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.086830 | orchestrator | 2025-03-23 13:33:10.086839 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-03-23 13:33:10.086847 | orchestrator | Sunday 23 March 2025 13:28:55 +0000 (0:00:01.458) 0:04:45.841 ********** 2025-03-23 13:33:10.086856 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.086864 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.086873 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.086881 | orchestrator | 2025-03-23 13:33:10.086890 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-03-23 13:33:10.086898 | orchestrator | Sunday 23 March 2025 13:28:58 +0000 (0:00:02.547) 0:04:48.389 ********** 2025-03-23 13:33:10.086906 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.086915 | orchestrator | 2025-03-23 13:33:10.086923 | 
orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-03-23 13:33:10.086939 | orchestrator | Sunday 23 March 2025 13:29:00 +0000 (0:00:01.593) 0:04:49.982 ********** 2025-03-23 13:33:10.086948 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-23 13:33:10.086956 | orchestrator | 2025-03-23 13:33:10.086965 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-03-23 13:33:10.086973 | orchestrator | Sunday 23 March 2025 13:29:03 +0000 (0:00:03.798) 0:04:53.780 ********** 2025-03-23 13:33:10.086982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-03-23 13:33:10.087063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-03-23 13:33:10.087076 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.087086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-03-23 13:33:10.087109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-03-23 13:33:10.087118 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.087171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-03-23 13:33:10.087183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-03-23 13:33:10.087198 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.087206 | orchestrator | 2025-03-23 13:33:10.087215 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-03-23 13:33:10.087223 | orchestrator | Sunday 23 March 2025 13:29:07 +0000 (0:00:04.088) 0:04:57.869 ********** 2025-03-23 13:33:10.087232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-03-23 13:33:10.087292 | 
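[editor note] The mariadb HAProxy service definition dumped repeatedly above is skipped on all three nodes, which is consistent with MariaDB being load-balanced through ProxySQL in this testbed rather than through HAProxy (the ProxySQL users config task further down reports changed). Purely for orientation, here is a minimal Python sketch of what that dumped haproxy.mariadb entry encodes, rendered as an haproxy.cfg-style listen block. All values are copied from the task output; the VIP is a placeholder, and the real kolla-ansible haproxy-config role renders this through Jinja2 templates, so the exact output will differ.

# Minimal sketch: render the 'mariadb' HAProxy entry shown in the log above.
mariadb_haproxy = {
    "enabled": True,
    "mode": "tcp",
    "listen_port": "3306",
    "frontend_tcp_extra": ["option clitcpka", "timeout client 3600s"],
    "backend_tcp_extra": ["option srvtcpka", "timeout server 3600s", "option httpchk"],
    "custom_member_list": [
        " server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5",
        " server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup",
        " server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup",
    ],
}

def render_listen(name: str, svc: dict, vip: str = "192.168.16.254") -> str:
    """Approximate the listen block implied by the definition; vip is a placeholder."""
    if not svc["enabled"]:
        return ""  # disabled entries contribute nothing
    lines = [f"listen {name}",
             f"    mode {svc['mode']}",
             f"    bind {vip}:{svc['listen_port']}"]
    lines += [f"    {opt}" for opt in svc["frontend_tcp_extra"]]
    lines += [f"    {opt}" for opt in svc["backend_tcp_extra"]]
    # custom_member_list supplies the backend members verbatim
    lines += [f"    {m.strip()}" for m in svc["custom_member_list"] if m.strip()]
    return "\n".join(lines)

print(render_listen("mariadb", mariadb_haproxy))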
orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-03-23 13:33:10.087305 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.087314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-03-23 13:33:10.087334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-03-23 13:33:10.087344 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.087395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-03-23 13:33:10.087413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-03-23 13:33:10.087422 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.087446 | orchestrator | 2025-03-23 13:33:10.087455 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-03-23 13:33:10.087463 | orchestrator | Sunday 23 March 2025 13:29:12 +0000 (0:00:04.375) 0:05:02.245 ********** 2025-03-23 13:33:10.087472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-03-23 13:33:10.087494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-03-23 13:33:10.087503 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.087512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-03-23 13:33:10.087524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-03-23 13:33:10.087534 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.087601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-03-23 13:33:10.087614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-03-23 13:33:10.087629 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.087638 | orchestrator | 2025-03-23 13:33:10.087646 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-03-23 13:33:10.087665 | orchestrator | Sunday 23 March 2025 13:29:16 +0000 (0:00:04.101) 0:05:06.346 ********** 2025-03-23 13:33:10.087674 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.087682 | 
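[editor note] The firewall task above is skipped on every node while the mariadb ProxySQL users config is templated (changed) on all three. As a hypothetical follow-up check, not something this job runs, the snippet below lists the users that such a config ends up loading into ProxySQL's admin schema. It assumes pymysql is installed; host, port and credentials are placeholders, with 6032 being ProxySQL's default admin port.

# Hypothetical verification sketch (not part of this job).
import pymysql

conn = pymysql.connect(
    host="192.168.16.10",   # placeholder: any node running proxysql
    port=6032,              # ProxySQL admin interface default port
    user="admin",           # placeholder admin credentials
    password="admin",
)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT username, default_hostgroup, active FROM mysql_users")
        for username, hostgroup, active in cur.fetchall():
            print(f"{username}: hostgroup={hostgroup} active={active}")
finally:
    conn.close()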
orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.087690 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.087699 | orchestrator | 2025-03-23 13:33:10.087707 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-03-23 13:33:10.087716 | orchestrator | Sunday 23 March 2025 13:29:19 +0000 (0:00:02.629) 0:05:08.976 ********** 2025-03-23 13:33:10.087724 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.087732 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.087740 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.087749 | orchestrator | 2025-03-23 13:33:10.087757 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-03-23 13:33:10.087766 | orchestrator | Sunday 23 March 2025 13:29:21 +0000 (0:00:02.593) 0:05:11.569 ********** 2025-03-23 13:33:10.087774 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.087783 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.087792 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.087800 | orchestrator | 2025-03-23 13:33:10.087809 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-03-23 13:33:10.087817 | orchestrator | Sunday 23 March 2025 13:29:22 +0000 (0:00:00.454) 0:05:12.024 ********** 2025-03-23 13:33:10.087826 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.087834 | orchestrator | 2025-03-23 13:33:10.087843 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-03-23 13:33:10.087851 | orchestrator | Sunday 23 March 2025 13:29:23 +0000 (0:00:01.622) 0:05:13.647 ********** 2025-03-23 13:33:10.087860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-03-23 13:33:10.087870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-03-23 13:33:10.087921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-03-23 13:33:10.087941 | orchestrator | 2025-03-23 13:33:10.087950 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-03-23 13:33:10.087958 | orchestrator | Sunday 23 March 2025 13:29:25 +0000 (0:00:01.689) 0:05:15.336 ********** 2025-03-23 13:33:10.087967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-03-23 13:33:10.087976 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.087992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-03-23 13:33:10.088002 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.088011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-03-23 13:33:10.088020 | orchestrator | skipping: 
[testbed-node-2] 2025-03-23 13:33:10.088028 | orchestrator | 2025-03-23 13:33:10.088036 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-03-23 13:33:10.088045 | orchestrator | Sunday 23 March 2025 13:29:25 +0000 (0:00:00.508) 0:05:15.845 ********** 2025-03-23 13:33:10.088054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-03-23 13:33:10.088063 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.088071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-03-23 13:33:10.088084 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.088093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-03-23 13:33:10.088101 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.088110 | orchestrator | 2025-03-23 13:33:10.088160 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-03-23 13:33:10.088172 | orchestrator | Sunday 23 March 2025 13:29:26 +0000 (0:00:00.900) 0:05:16.746 ********** 2025-03-23 13:33:10.088181 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.088190 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.088199 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.088207 | orchestrator | 2025-03-23 13:33:10.088216 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-03-23 13:33:10.088225 | orchestrator | Sunday 23 March 2025 13:29:27 +0000 (0:00:00.800) 0:05:17.546 ********** 2025-03-23 13:33:10.088233 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.088242 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.088250 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.088259 | orchestrator | 2025-03-23 13:33:10.088267 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-03-23 13:33:10.088276 | orchestrator | Sunday 23 March 2025 13:29:29 +0000 (0:00:02.092) 0:05:19.639 ********** 2025-03-23 13:33:10.088284 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.088293 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.088301 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.088310 | orchestrator | 2025-03-23 13:33:10.088318 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-03-23 13:33:10.088327 | orchestrator | Sunday 23 March 2025 13:29:30 +0000 (0:00:00.342) 0:05:19.982 ********** 2025-03-23 13:33:10.088336 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.088353 | orchestrator | 2025-03-23 13:33:10.088363 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-03-23 
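[editor note] The memcached definition dumped above carries a container healthcheck (healthcheck_listen memcached 11211), while its HAProxy entry is enabled: False, so no memcached listener is expected on the load balancer. Below is a rough external stand-in for that healthcheck, assuming direct access to a controller node; it only opens a TCP connection and issues the memcached text-protocol version command, which is not exactly what the in-container helper does. The node address is a placeholder.

# Rough, external stand-in for the memcached healthcheck shown above.
import socket

HOST = "192.168.16.10"   # placeholder: one of the memcached nodes
PORT = 11211             # port from the healthcheck in the dumped definition

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    sock.sendall(b"version\r\n")
    reply = sock.recv(1024).decode().strip()
    print(reply)          # e.g. "VERSION 1.6.14"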
13:33:10.088371 | orchestrator | Sunday 23 March 2025 13:29:31 +0000 (0:00:01.730) 0:05:21.712 ********** 2025-03-23 13:33:10.088380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:33:10.088389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.088403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:33:10.088454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.088473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.088483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.088492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:33:10.088505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.088561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.088619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:33:10.088630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.088640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:33:10.088648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:33:10.088663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.088671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.088724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:33:10.088736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:33:10.088765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:33:10.088774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.088783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.088796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.088805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:33:10.088855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:33:10.088867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.088884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.088893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.088906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:33:10.088915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:33:10.088965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:33:10.088983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.088992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:33:10.089014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:33:10.089063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089084 | orchestrator | 
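[editor note] Of the many neutron service definitions dumped above, only neutron-server is both enabled and carries enabled HAProxy entries, which matches it being the only item reported as changed while every agent entry is skipped. The sketch below runs that selection over a trimmed copy of the data; the keys and flags are taken from the log, but the selection rule is a simplification of the role's actual conditions (groups, host membership, and so on).

# Trimmed copy of the neutron service definitions dumped above (flags only).
neutron_services = {
    "neutron-server": {
        "enabled": True,
        "haproxy": {
            "neutron_server": {"enabled": True, "mode": "http", "port": "9696"},
            "neutron_server_external": {"enabled": True, "mode": "http", "port": "9696",
                                        "external_fqdn": "api.testbed.osism.xyz"},
        },
    },
    "neutron-openvswitch-agent": {"enabled": False},
    "neutron-ovn-metadata-agent": {"enabled": True},   # enabled, but no haproxy key
    "neutron-tls-proxy": {"enabled": "no",
                          "haproxy": {"neutron_tls_proxy": {"enabled": False}}},
}

def wants_haproxy(svc: dict) -> bool:
    """True if the service is enabled and defines at least one enabled haproxy entry."""
    if str(svc.get("enabled")).lower() not in ("true", "yes"):
        return False
    return any(entry.get("enabled") for entry in svc.get("haproxy", {}).values())

for name, svc in neutron_services.items():
    print(f"{name}: {'configured' if wants_haproxy(svc) else 'skipped'}")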
changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:33:10.089093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:33:10.089184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:33:10.089209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:33:10.089221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:33:10.089244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.089299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:33:10.089307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:33:10.089334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:33:10.089341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089348 | orchestrator | 2025-03-23 13:33:10.089355 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-03-23 13:33:10.089362 | orchestrator | Sunday 23 March 2025 13:29:38 +0000 (0:00:06.921) 0:05:28.634 ********** 2025-03-23 13:33:10.089406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:33:10.089432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:33:10.089505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:33:10.089523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:33:10.089541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:33:10.089549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:33:10.089653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:33:10.089668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:33:10.089728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:33:10.089805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:33:10.089816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:33:10.089828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.089844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089852 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:33:10.089866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:33:10.089874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:33:10.089926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:33:10.089949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.089972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:33:10.090034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:33:10.090062 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:33:10.090077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.090085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.090092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.090100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:33:10.090107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.090114 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.090160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:33:10.090176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.090190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.090198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:33:10.090206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 
'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:33:10.090249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:33:10.090278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:33:10.090287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.090295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.090302 | orchestrator | 
skipping: [testbed-node-0] 2025-03-23 13:33:10.090309 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.090316 | orchestrator | 2025-03-23 13:33:10.090323 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-03-23 13:33:10.090333 | orchestrator | Sunday 23 March 2025 13:29:40 +0000 (0:00:02.195) 0:05:30.830 ********** 2025-03-23 13:33:10.090340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-03-23 13:33:10.090347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-03-23 13:33:10.090355 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.090364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-03-23 13:33:10.090371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-03-23 13:33:10.090378 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.090385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-03-23 13:33:10.090396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-03-23 13:33:10.090403 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.090413 | orchestrator | 2025-03-23 13:33:10.090420 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-03-23 13:33:10.090427 | orchestrator | Sunday 23 March 2025 13:29:43 +0000 (0:00:02.232) 0:05:33.062 ********** 2025-03-23 13:33:10.090434 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.090441 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.090465 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.090473 | orchestrator | 2025-03-23 13:33:10.090480 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-03-23 13:33:10.090487 | orchestrator | Sunday 23 March 2025 13:29:44 +0000 (0:00:01.596) 0:05:34.659 ********** 2025-03-23 13:33:10.090502 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.090510 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.090517 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.090524 | orchestrator | 2025-03-23 13:33:10.090531 | orchestrator | TASK [include_role : placement] ************************************************ 2025-03-23 13:33:10.090538 | orchestrator | Sunday 23 March 2025 13:29:47 +0000 (0:00:02.663) 0:05:37.323 ********** 2025-03-23 13:33:10.090545 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.090552 | orchestrator | 2025-03-23 13:33:10.090559 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] 
****************** 2025-03-23 13:33:10.090566 | orchestrator | Sunday 23 March 2025 13:29:49 +0000 (0:00:01.840) 0:05:39.163 ********** 2025-03-23 13:33:10.090573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.090598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.090612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.090624 | orchestrator | 2025-03-23 13:33:10.090631 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-03-23 13:33:10.090638 | orchestrator | Sunday 23 March 2025 13:29:54 +0000 (0:00:04.738) 0:05:43.902 ********** 2025-03-23 13:33:10.090663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.090672 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.090679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.090687 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.090694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.090701 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.090708 | orchestrator | 2025-03-23 13:33:10.090715 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-03-23 13:33:10.090726 | orchestrator | Sunday 23 March 2025 13:29:54 +0000 (0:00:00.720) 0:05:44.622 ********** 2025-03-23 13:33:10.090733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-23 13:33:10.090741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-23 13:33:10.090748 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.090755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-23 13:33:10.090762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-23 13:33:10.090769 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.090776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-23 13:33:10.090783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-23 13:33:10.090791 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.090798 | orchestrator | 2025-03-23 13:33:10.090805 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-03-23 13:33:10.090826 | orchestrator | Sunday 23 March 2025 13:29:56 +0000 (0:00:01.364) 0:05:45.986 ********** 2025-03-23 13:33:10.090834 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.090842 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.090848 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.090855 | orchestrator | 2025-03-23 13:33:10.090862 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-03-23 13:33:10.090869 | orchestrator | Sunday 23 March 2025 13:29:57 +0000 (0:00:01.503) 0:05:47.489 ********** 2025-03-23 13:33:10.090876 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.090883 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.090890 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.090897 | orchestrator | 2025-03-23 13:33:10.090904 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-03-23 13:33:10.090911 | orchestrator | Sunday 23 March 2025 13:30:00 +0000 (0:00:02.572) 0:05:50.061 ********** 2025-03-23 13:33:10.090918 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.090925 | orchestrator | 2025-03-23 13:33:10.090932 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-03-23 13:33:10.090939 | orchestrator | Sunday 23 March 2025 13:30:01 +0000 (0:00:01.780) 0:05:51.842 ********** 2025-03-23 13:33:10.090954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.090966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.090974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.090997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.091011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.091019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.091031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.091038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.091060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
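In the "Copying over nova haproxy config" task above, only the nova-api item is reported as changed on each node, while nova-scheduler and nova-super-conductor are skipped; judging from the printed items, the role only renders HAProxy configuration for services that are enabled and carry a 'haproxy' mapping. Below is a minimal Python sketch of that selection, using hypothetical, trimmed entries shaped like the ones printed in the log; it is an illustration of the observed pattern, not the actual kolla-ansible task logic.

# Illustrative sketch (not kolla-ansible's task code): approximate the selection
# the "Copying over <service> haproxy config" loop appears to make above --
# items that are enabled and carry a 'haproxy' mapping get a config rendered
# ("changed"), everything else is reported as "skipping".

def wants_haproxy_config(service: dict) -> bool:
    """Return True when a service entry should get an HAProxy config snippet."""
    enabled = service.get("enabled")
    # the log shows both booleans and the strings 'yes'/'no' in these dicts
    if isinstance(enabled, str):
        enabled = enabled.lower() in ("yes", "true", "1")
    return bool(enabled) and bool(service.get("haproxy"))

# Hypothetical entries, trimmed from the shapes seen in the log output:
services = {
    "nova-api": {"enabled": True, "haproxy": {"nova_api": {"port": "8774"}}},
    "nova-scheduler": {"enabled": True},        # no 'haproxy' key -> skipping
    "nova-super-conductor": {"enabled": "no"},  # disabled -> skipping
}

for name, svc in services.items():
    print(name, "-> changed" if wants_haproxy_config(svc) else "-> skipping")

Running the sketch prints "changed" only for nova-api, which matches the pattern seen earlier in this sequence for neutron-server and placement-api (both enabled with a 'haproxy' mapping) versus the agent containers that have no 'haproxy' key.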
2025-03-23 13:33:10.091069 | orchestrator | 2025-03-23 13:33:10.091076 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-03-23 13:33:10.091083 | orchestrator | Sunday 23 March 2025 13:30:08 +0000 (0:00:06.618) 0:05:58.460 ********** 2025-03-23 13:33:10.091090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.091107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.091115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.091122 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.091129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.091152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.091161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.091172 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.091185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.091192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.091200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.091207 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.091214 | orchestrator | 2025-03-23 13:33:10.091221 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-03-23 13:33:10.091228 | orchestrator | Sunday 23 March 2025 13:30:10 +0000 (0:00:01.528) 0:05:59.989 ********** 2025-03-23 13:33:10.091235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-23 13:33:10.091257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-23 13:33:10.091266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-23 13:33:10.091273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-23 13:33:10.091280 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.091287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-23 13:33:10.091298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-23 13:33:10.091306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-23 13:33:10.091313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-23 
13:33:10.091320 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.091327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-23 13:33:10.091334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-23 13:33:10.091341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-23 13:33:10.091348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-23 13:33:10.091355 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.091362 | orchestrator | 2025-03-23 13:33:10.091369 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-03-23 13:33:10.091376 | orchestrator | Sunday 23 March 2025 13:30:11 +0000 (0:00:01.604) 0:06:01.594 ********** 2025-03-23 13:33:10.091384 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.091391 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.091397 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.091404 | orchestrator | 2025-03-23 13:33:10.091411 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-03-23 13:33:10.091418 | orchestrator | Sunday 23 March 2025 13:30:13 +0000 (0:00:01.683) 0:06:03.277 ********** 2025-03-23 13:33:10.091425 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.091432 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.091439 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.091446 | orchestrator | 2025-03-23 13:33:10.091453 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-03-23 13:33:10.091460 | orchestrator | Sunday 23 March 2025 13:30:16 +0000 (0:00:02.710) 0:06:05.988 ********** 2025-03-23 13:33:10.091467 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.091474 | orchestrator | 2025-03-23 13:33:10.091484 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-03-23 13:33:10.091491 | orchestrator | Sunday 23 March 2025 13:30:17 +0000 (0:00:01.829) 0:06:07.817 ********** 2025-03-23 13:33:10.091498 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item=nova-novncproxy) 2025-03-23 13:33:10.091505 | orchestrator | 2025-03-23 13:33:10.091512 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-03-23 13:33:10.091519 | orchestrator | Sunday 23 March 2025 13:30:19 +0000 (0:00:01.533) 0:06:09.351 ********** 2025-03-23 13:33:10.091541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-03-23 13:33:10.091554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-03-23 13:33:10.091567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-03-23 13:33:10.091574 | orchestrator | 2025-03-23 13:33:10.091594 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-03-23 13:33:10.091601 | orchestrator | Sunday 23 March 2025 13:30:24 +0000 (0:00:05.390) 0:06:14.741 ********** 2025-03-23 13:33:10.091609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-23 13:33:10.091616 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.091623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-23 13:33:10.091630 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.091638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-23 13:33:10.091645 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.091652 | 
orchestrator | 2025-03-23 13:33:10.091659 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-03-23 13:33:10.091666 | orchestrator | Sunday 23 March 2025 13:30:26 +0000 (0:00:01.970) 0:06:16.712 ********** 2025-03-23 13:33:10.091673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-23 13:33:10.091684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-23 13:33:10.091692 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.091699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-23 13:33:10.091725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-23 13:33:10.091734 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.091741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-23 13:33:10.091748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-23 13:33:10.091755 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.091762 | orchestrator | 2025-03-23 13:33:10.091769 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-03-23 13:33:10.091776 | orchestrator | Sunday 23 March 2025 13:30:28 +0000 (0:00:02.168) 0:06:18.880 ********** 2025-03-23 13:33:10.091783 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.091790 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.091797 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.091804 | orchestrator | 2025-03-23 13:33:10.091811 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-03-23 13:33:10.091817 | orchestrator | Sunday 23 March 2025 13:30:33 +0000 (0:00:04.026) 0:06:22.906 ********** 2025-03-23 13:33:10.091824 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.091831 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.091838 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.091845 | orchestrator | 2025-03-23 13:33:10.091852 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-03-23 13:33:10.091859 | orchestrator | Sunday 23 March 2025 13:30:37 +0000 (0:00:04.590) 0:06:27.496 ********** 2025-03-23 13:33:10.091869 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-03-23 13:33:10.091876 | orchestrator | 2025-03-23 13:33:10.091883 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-03-23 13:33:10.091890 | orchestrator | Sunday 23 March 2025 13:30:39 +0000 (0:00:01.530) 0:06:29.026 ********** 2025-03-23 13:33:10.091897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-23 13:33:10.091904 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.091912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-23 13:33:10.091923 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.091930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-23 13:33:10.091938 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.091944 | orchestrator | 2025-03-23 13:33:10.091951 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-03-23 13:33:10.091958 | orchestrator | Sunday 23 March 2025 13:30:41 +0000 (0:00:02.071) 0:06:31.098 ********** 2025-03-23 13:33:10.091998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-23 13:33:10.092007 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.092014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 
'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-23 13:33:10.092022 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.092029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-23 13:33:10.092036 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.092043 | orchestrator | 2025-03-23 13:33:10.092050 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-03-23 13:33:10.092057 | orchestrator | Sunday 23 March 2025 13:30:43 +0000 (0:00:01.837) 0:06:32.935 ********** 2025-03-23 13:33:10.092064 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.092071 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.092078 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.092089 | orchestrator | 2025-03-23 13:33:10.092096 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-03-23 13:33:10.092103 | orchestrator | Sunday 23 March 2025 13:30:45 +0000 (0:00:02.409) 0:06:35.345 ********** 2025-03-23 13:33:10.092109 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:33:10.092117 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:33:10.092124 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:33:10.092131 | orchestrator | 2025-03-23 13:33:10.092142 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-03-23 13:33:10.092149 | orchestrator | Sunday 23 March 2025 13:30:48 +0000 (0:00:03.178) 0:06:38.523 ********** 2025-03-23 13:33:10.092156 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:33:10.092163 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:33:10.092170 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:33:10.092177 | orchestrator | 2025-03-23 13:33:10.092183 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-03-23 13:33:10.092190 | orchestrator | Sunday 23 March 2025 13:30:53 +0000 (0:00:04.382) 0:06:42.906 ********** 2025-03-23 13:33:10.092198 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-03-23 13:33:10.092205 | orchestrator | 2025-03-23 13:33:10.092212 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-03-23 13:33:10.092219 | orchestrator | Sunday 23 March 2025 13:30:54 +0000 (0:00:01.696) 0:06:44.603 ********** 2025-03-23 13:33:10.092226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-23 13:33:10.092233 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.092240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-23 13:33:10.092248 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.092270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-23 13:33:10.092279 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.092286 | orchestrator | 2025-03-23 13:33:10.092293 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-03-23 13:33:10.092300 | orchestrator | Sunday 23 March 2025 13:30:57 +0000 (0:00:02.310) 0:06:46.913 ********** 2025-03-23 13:33:10.092307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-23 13:33:10.092315 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.092327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-23 13:33:10.092338 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.092345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-23 13:33:10.092353 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.092360 | orchestrator | 2025-03-23 13:33:10.092367 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-03-23 13:33:10.092374 | orchestrator | Sunday 23 March 2025 13:30:58 +0000 (0:00:01.706) 0:06:48.620 ********** 2025-03-23 13:33:10.092380 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.092387 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.092394 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.092401 | orchestrator | 2025-03-23 13:33:10.092408 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-03-23 13:33:10.092415 | orchestrator | Sunday 23 March 2025 13:31:01 +0000 (0:00:02.503) 0:06:51.123 ********** 2025-03-23 13:33:10.092422 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:33:10.092429 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:33:10.092436 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:33:10.092443 | orchestrator | 2025-03-23 13:33:10.092450 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-03-23 13:33:10.092460 | orchestrator | Sunday 23 March 2025 13:31:04 +0000 (0:00:02.902) 0:06:54.025 ********** 2025-03-23 13:33:10.092467 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:33:10.092474 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:33:10.092481 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:33:10.092488 | orchestrator | 2025-03-23 13:33:10.092495 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-03-23 13:33:10.092501 | orchestrator | Sunday 23 March 2025 13:31:08 +0000 (0:00:03.892) 0:06:57.918 ********** 2025-03-23 13:33:10.092508 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.092515 | orchestrator | 2025-03-23 13:33:10.092522 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-03-23 13:33:10.092529 | orchestrator | Sunday 23 March 2025 13:31:09 +0000 (0:00:01.931) 0:06:59.849 ********** 2025-03-23 13:33:10.092552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.092561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-23 13:33:10.092572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-23 13:33:10.092617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-23 13:33:10.092632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.092640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.092648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-23 13:33:10.092673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-23 13:33:10.092686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-23 13:33:10.092699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.092707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.092714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-23 13:33:10.092722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-23 13:33:10.092745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-23 13:33:10.092758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.092771 | orchestrator | 2025-03-23 13:33:10.092779 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-03-23 13:33:10.092786 | orchestrator | Sunday 23 March 2025 13:31:14 +0000 (0:00:04.896) 0:07:04.745 ********** 2025-03-23 13:33:10.092793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2025-03-23 13:33:10.092800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-23 13:33:10.092808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-23 13:33:10.092870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-23 13:33:10.092899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.092908 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.092916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}}}})  2025-03-23 13:33:10.092923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-23 13:33:10.092930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-23 13:33:10.092938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-23 13:33:10.092945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.092956 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.092979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.092988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-23 13:33:10.092995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-23 13:33:10.093003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-23 13:33:10.093010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:33:10.093017 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.093024 | orchestrator | 2025-03-23 13:33:10.093031 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-03-23 13:33:10.093039 | orchestrator | Sunday 23 March 2025 13:31:16 +0000 (0:00:01.226) 0:07:05.972 ********** 2025-03-23 13:33:10.093046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-23 13:33:10.093059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-23 13:33:10.093066 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.093073 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-23 13:33:10.093080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-23 13:33:10.093087 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.093110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-23 13:33:10.093118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-23 13:33:10.093125 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.093132 | orchestrator | 2025-03-23 13:33:10.093139 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-03-23 13:33:10.093145 | orchestrator | Sunday 23 March 2025 13:31:17 +0000 (0:00:01.660) 0:07:07.633 ********** 2025-03-23 13:33:10.093152 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.093158 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.093164 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.093170 | orchestrator | 2025-03-23 13:33:10.093177 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-03-23 13:33:10.093183 | orchestrator | Sunday 23 March 2025 13:31:19 +0000 (0:00:01.638) 0:07:09.271 ********** 2025-03-23 13:33:10.093189 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.093195 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.093201 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.093207 | orchestrator | 2025-03-23 13:33:10.093214 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-03-23 13:33:10.093220 | orchestrator | Sunday 23 March 2025 13:31:22 +0000 (0:00:03.223) 0:07:12.494 ********** 2025-03-23 13:33:10.093226 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.093232 | orchestrator | 2025-03-23 13:33:10.093238 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-03-23 13:33:10.093245 | orchestrator | Sunday 23 March 2025 13:31:24 +0000 (0:00:02.006) 0:07:14.501 ********** 2025-03-23 13:33:10.093251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-23 13:33:10.093258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-23 13:33:10.093268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-23 13:33:10.093288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-23 13:33:10.093296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-23 13:33:10.093303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-23 13:33:10.093313 | orchestrator | 2025-03-23 13:33:10.093320 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-03-23 13:33:10.093326 | orchestrator | Sunday 23 March 2025 13:31:31 +0000 (0:00:07.257) 0:07:21.759 ********** 2025-03-23 13:33:10.093346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-23 13:33:10.093354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-23 13:33:10.093361 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.093367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-23 13:33:10.093374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-23 13:33:10.093384 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.093390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-23 13:33:10.093411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-23 13:33:10.093418 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.093425 | orchestrator | 2025-03-23 13:33:10.093431 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-03-23 13:33:10.093438 | orchestrator | Sunday 23 March 2025 13:31:32 +0000 (0:00:01.020) 0:07:22.779 ********** 2025-03-23 13:33:10.093444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-03-23 13:33:10.093450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-23 13:33:10.093457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-23 13:33:10.093466 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.093473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-03-23 13:33:10.093479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-23 13:33:10.093485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-23 13:33:10.093491 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.093500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-03-23 13:33:10.093506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-23 13:33:10.093513 | 
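Each item in the "Copying over ... haproxy config" loops above carries a `haproxy` sub-map (mode, port, internal/external flag, optional `external_fqdn`, auth settings, extra frontend options) alongside the container definition. As a minimal sketch of how such a sub-map could be expanded into HAProxy `listen` stanzas, assuming the internal VIP and backend addresses that appear elsewhere in this log, and without claiming to reproduce kolla-ansible's real Jinja2 templates:

```python
# Illustrative sketch only -- not kolla-ansible's actual template logic.
# It expands a kolla-style 'haproxy' sub-map (as echoed in the log above)
# into simple HAProxy 'listen' stanzas. The VIP and backend addresses are
# assumptions read off the healthcheck URLs and no_proxy lists in this log.

INTERNAL_VIP = "192.168.16.9"                      # assumed internal VIP
BACKENDS = {"testbed-node-0": "192.168.16.10",     # from the healthcheck URLs
            "testbed-node-1": "192.168.16.11",
            "testbed-node-2": "192.168.16.12"}

SERVICES = {  # trimmed copies of the opensearch entries shown above
    "opensearch": {"enabled": True, "mode": "http", "external": False,
                   "port": "9200",
                   "frontend_http_extra": ["option dontlog-normal"]},
    "opensearch_dashboards_external": {"enabled": True, "mode": "http",
                                       "external": True, "port": "5601",
                                       "listen_port": "5601"},
}


def render(name: str, cfg: dict) -> str:
    """Render one haproxy sub-map entry as a 'listen' block (sketch)."""
    if not cfg.get("enabled"):
        return ""
    bind_ip = "0.0.0.0" if cfg.get("external") else INTERNAL_VIP
    lines = [f"listen {name}",
             f"    mode {cfg['mode']}",
             f"    bind {bind_ip}:{cfg.get('listen_port', cfg['port'])}"]
    lines += [f"    {extra}" for extra in cfg.get("frontend_http_extra", [])]
    lines += [f"    server {host} {addr}:{cfg['port']} check"
              for host, addr in BACKENDS.items()]
    return "\n".join(lines)


if __name__ == "__main__":
    for svc, cfg in SERVICES.items():
        block = render(svc, cfg)
        if block:
            print(block, end="\n\n")
```

The resulting stanzas only approximate the layout of the files written under /etc/kolla/haproxy/; the point is that one dictionary entry per service is enough to derive the bind address, listen port, backend servers and per-frontend extras seen in these tasks.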
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-23 13:33:10.093519 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.093525 | orchestrator | 2025-03-23 13:33:10.093532 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-03-23 13:33:10.093538 | orchestrator | Sunday 23 March 2025 13:31:34 +0000 (0:00:01.706) 0:07:24.486 ********** 2025-03-23 13:33:10.093544 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.093550 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.093556 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.093562 | orchestrator | 2025-03-23 13:33:10.093568 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-03-23 13:33:10.093574 | orchestrator | Sunday 23 March 2025 13:31:35 +0000 (0:00:00.516) 0:07:25.002 ********** 2025-03-23 13:33:10.093593 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.093600 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.093606 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.093612 | orchestrator | 2025-03-23 13:33:10.093618 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-03-23 13:33:10.093624 | orchestrator | Sunday 23 March 2025 13:31:37 +0000 (0:00:02.040) 0:07:27.043 ********** 2025-03-23 13:33:10.093644 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.093652 | orchestrator | 2025-03-23 13:33:10.093658 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-03-23 13:33:10.093664 | orchestrator | Sunday 23 March 2025 13:31:39 +0000 (0:00:02.162) 0:07:29.206 ********** 2025-03-23 13:33:10.093671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-23 13:33:10.093681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 13:33:10.093688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.093695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.093701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 13:33:10.093708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-23 13:33:10.093731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 13:33:10.093739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.093749 | 
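In the prometheus loop above, only the entries whose value carries a `haproxy` sub-map with an enabled frontend (prometheus-server, prometheus-alertmanager) are reported as changed; the plain exporters (node, mysqld, memcached, cadvisor, elasticsearch, blackbox, libvirt) and the disabled services are skipped. A small sketch of that filtering step, written as a standalone approximation rather than the role's actual `when` condition:

```python
# Sketch of the per-item filter behind the changed/skipping pattern above:
# an item only yields haproxy configuration when the service is enabled and
# declares a 'haproxy' map with at least one enabled frontend. This mirrors
# the log output but is not kolla-ansible's literal 'when' expression.

def needs_haproxy_config(item: dict) -> bool:
    value = item["value"]
    if not value.get("enabled"):
        return False
    haproxy = value.get("haproxy") or {}
    return any(entry.get("enabled") for entry in haproxy.values())


# Reduced versions of the prometheus items from the loop above.
services = {
    "prometheus-server": {"enabled": True,
                          "haproxy": {"prometheus_server": {"enabled": True}}},
    "prometheus-node-exporter": {"enabled": True},       # no 'haproxy' key
    "prometheus-alertmanager": {"enabled": True,
                                "haproxy": {"prometheus_alertmanager": {"enabled": True}}},
    "prometheus-openstack-exporter": {"enabled": False,  # service disabled
                                      "haproxy": {}},
}

for key, value in services.items():
    verdict = "changed" if needs_haproxy_config({"key": key, "value": value}) else "skipping"
    print(f"{verdict}: {key}")
```

Running the sketch prints changed for prometheus-server and prometheus-alertmanager and skipping for the rest, matching the pattern recorded in this task.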
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-23 13:33:10.093756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.093762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 13:33:10.093769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 13:33:10.093775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.093796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.093804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 13:33:10.093814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-23 13:33:10.093821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:33:10.093827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.093834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.093854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:33:10.093862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.093872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-23 13:33:10.093879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:33:10.093886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.093892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.093899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:33:10.093908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.093918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-23 13:33:10.093925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:33:10.093931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.093938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.093944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:33:10.093954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.093964 | orchestrator | 2025-03-23 13:33:10.093970 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-03-23 13:33:10.093977 | orchestrator | Sunday 23 March 2025 13:31:45 +0000 (0:00:05.866) 0:07:35.072 ********** 2025-03-23 13:33:10.093983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:33:10.093990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 13:33:10.093996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.094003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.094009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 13:33:10.094048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 13:33:10.094059 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:33:10.094066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.094072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.094079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:33:10.094086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.094092 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.094101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:33:10.094112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 13:33:10.094118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.094125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.094131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 13:33:10.094138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:33:10.094145 | 
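The "Add configuration for prometheus when using single external frontend" task skips on all three nodes, consistent with that option being left disabled in this testbed, where external services are published on their own ports under api.testbed.osism.xyz. Purely as an illustration of what single-frontend dispatch keyed on `external_fqdn` looks like in general, using hypothetical FQDNs, certificate path and backend address rather than anything taken from this deployment:

```python
# Rough illustration of the "single external frontend" idea: instead of one
# external port per service, externally published services are dispatched from
# one shared frontend by their external_fqdn. The FQDNs, certificate path and
# backend address below are hypothetical; this is not the role's template.

external_services = {
    "opensearch_dashboards": {"external_fqdn": "dashboards.testbed.example.com",
                              "port": "5601"},
    "prometheus_alertmanager": {"external_fqdn": "alertmanager.testbed.example.com",
                                "port": "9093"},
}

frontend = ["frontend external_frontend",
            "    mode http",
            "    bind 0.0.0.0:443 ssl crt /etc/haproxy/external.pem"]  # hypothetical cert
backends = []

for name, cfg in external_services.items():
    acl = f"is_{name}"
    frontend.append(f"    acl {acl} hdr(host) -i {cfg['external_fqdn']}")
    frontend.append(f"    use_backend {name}_back if {acl}")
    backends.append(f"backend {name}_back\n"
                    f"    server testbed-node-0 192.168.16.10:{cfg['port']} check")

print("\n".join(frontend + [""] + backends))
```

With per-host ACLs like these, one 443 frontend can serve every external API, which is why the role carries a separate code path for that mode even though it stays unused in this run.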
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 13:33:10.094159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 13:33:10.094166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:33:10.094173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.094183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-03-23 13:33:10.094189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.094196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.094206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 13:33:10.094215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:33:10.094222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 13:33:10.094236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.094242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:33:10.094249 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.094255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.094265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:33:10.094274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:33:10.094281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-03-23 13:33:10.094287 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.094293 | orchestrator | 2025-03-23 13:33:10.094300 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-03-23 13:33:10.094306 | orchestrator | Sunday 23 March 2025 13:31:46 +0000 (0:00:01.372) 0:07:36.445 ********** 2025-03-23 13:33:10.094312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-03-23 13:33:10.094318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-03-23 13:33:10.094325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-23 13:33:10.094333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-23 13:33:10.094339 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.094346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-03-23 13:33:10.094352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-03-23 13:33:10.094361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-23 13:33:10.094368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-03-23 13:33:10.094375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-23 13:33:10.094381 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.094390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-03-23 13:33:10.094396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-23 13:33:10.094405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-23 13:33:10.094411 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.094420 | orchestrator | 2025-03-23 13:33:10.094427 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-03-23 13:33:10.094433 | orchestrator | Sunday 23 March 2025 13:31:48 +0000 (0:00:01.755) 0:07:38.200 ********** 2025-03-23 13:33:10.094439 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.094446 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.094452 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.094458 | orchestrator | 2025-03-23 13:33:10.094464 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-03-23 13:33:10.094471 | orchestrator | Sunday 23 March 2025 13:31:49 +0000 (0:00:00.879) 0:07:39.080 ********** 2025-03-23 13:33:10.094477 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.094483 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.094489 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.094495 | orchestrator | 2025-03-23 13:33:10.094502 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-03-23 13:33:10.094508 | orchestrator | Sunday 23 March 2025 13:31:51 +0000 (0:00:02.359) 0:07:41.439 ********** 2025-03-23 13:33:10.094514 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.094520 | orchestrator | 2025-03-23 13:33:10.094527 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-03-23 13:33:10.094533 | orchestrator | Sunday 23 March 2025 13:31:53 +0000 (0:00:02.163) 0:07:43.602 ********** 2025-03-23 13:33:10.094544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-23 13:33:10.094555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': 
None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-23 13:33:10.094565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-23 13:33:10.094587 | orchestrator | 2025-03-23 13:33:10.094594 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-03-23 13:33:10.094600 | orchestrator | Sunday 23 March 2025 13:31:57 +0000 (0:00:03.510) 0:07:47.113 ********** 2025-03-23 13:33:10.094606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-03-23 13:33:10.094613 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.094619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-03-23 13:33:10.094630 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.094636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-03-23 13:33:10.094642 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.094649 | orchestrator | 2025-03-23 13:33:10.094655 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-03-23 13:33:10.094661 | orchestrator | Sunday 23 March 2025 13:31:57 +0000 (0:00:00.524) 0:07:47.637 ********** 2025-03-23 13:33:10.094667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-03-23 13:33:10.094674 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.094680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-03-23 13:33:10.094686 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.094694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-03-23 13:33:10.094701 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.094707 | orchestrator | 2025-03-23 13:33:10.094714 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-03-23 13:33:10.094720 | orchestrator | Sunday 23 March 2025 13:31:59 +0000 (0:00:01.374) 0:07:49.012 ********** 2025-03-23 13:33:10.094726 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.094732 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.094738 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.094744 | orchestrator | 2025-03-23 13:33:10.094751 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules 
config] *********** 2025-03-23 13:33:10.094757 | orchestrator | Sunday 23 March 2025 13:31:59 +0000 (0:00:00.547) 0:07:49.559 ********** 2025-03-23 13:33:10.094763 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.094769 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.094775 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.094781 | orchestrator | 2025-03-23 13:33:10.094788 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-03-23 13:33:10.094794 | orchestrator | Sunday 23 March 2025 13:32:01 +0000 (0:00:02.034) 0:07:51.594 ********** 2025-03-23 13:33:10.094805 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:33:10.094811 | orchestrator | 2025-03-23 13:33:10.094818 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-03-23 13:33:10.094824 | orchestrator | Sunday 23 March 2025 13:32:03 +0000 (0:00:02.118) 0:07:53.712 ********** 2025-03-23 13:33:10.094830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.094842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.094849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.094858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.094869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.094875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-03-23 13:33:10.094882 | orchestrator | 2025-03-23 13:33:10.094888 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-03-23 13:33:10.094894 | orchestrator | Sunday 23 
March 2025 13:32:13 +0000 (0:00:09.921) 0:08:03.634 ********** 2025-03-23 13:33:10.094906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.094916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.094926 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.094933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.094939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.094946 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.094956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.094967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-03-23 13:33:10.094987 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.094994 | orchestrator | 2025-03-23 13:33:10.095000 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-03-23 13:33:10.095006 | orchestrator | Sunday 23 March 2025 13:32:15 +0000 (0:00:01.782) 0:08:05.416 ********** 2025-03-23 13:33:10.095013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-23 13:33:10.095019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-23 13:33:10.095026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-23 13:33:10.095032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-23 13:33:10.095038 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.095045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-23 13:33:10.095051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-23 13:33:10.095057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-23 13:33:10.095064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-23 13:33:10.095070 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.095076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-23 13:33:10.095083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-23 13:33:10.095089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-23 13:33:10.095095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-23 13:33:10.095102 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.095108 | orchestrator | 2025-03-23 13:33:10.095114 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-03-23 13:33:10.095121 | orchestrator | Sunday 23 March 2025 13:32:17 +0000 (0:00:01.625) 0:08:07.042 ********** 2025-03-23 13:33:10.095127 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.095133 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.095139 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.095149 | orchestrator | 2025-03-23 13:33:10.095155 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-03-23 13:33:10.095164 | orchestrator | Sunday 23 March 2025 13:32:18 +0000 (0:00:01.691) 0:08:08.733 ********** 2025-03-23 13:33:10.095170 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.095176 | orchestrator | changed: [testbed-node-1] 
2025-03-23 13:33:10.095182 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.095188 | orchestrator | 2025-03-23 13:33:10.095195 | orchestrator | TASK [include_role : swift] **************************************************** 2025-03-23 13:33:10.095201 | orchestrator | Sunday 23 March 2025 13:32:21 +0000 (0:00:02.881) 0:08:11.615 ********** 2025-03-23 13:33:10.095207 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.095213 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.095222 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.095228 | orchestrator | 2025-03-23 13:33:10.095234 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-03-23 13:33:10.095241 | orchestrator | Sunday 23 March 2025 13:32:22 +0000 (0:00:00.655) 0:08:12.270 ********** 2025-03-23 13:33:10.095247 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.095253 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.095259 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.095265 | orchestrator | 2025-03-23 13:33:10.095272 | orchestrator | TASK [include_role : trove] **************************************************** 2025-03-23 13:33:10.095278 | orchestrator | Sunday 23 March 2025 13:32:22 +0000 (0:00:00.328) 0:08:12.598 ********** 2025-03-23 13:33:10.095284 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.095290 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.095296 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.095302 | orchestrator | 2025-03-23 13:33:10.095309 | orchestrator | TASK [include_role : venus] **************************************************** 2025-03-23 13:33:10.095315 | orchestrator | Sunday 23 March 2025 13:32:23 +0000 (0:00:00.605) 0:08:13.204 ********** 2025-03-23 13:33:10.095321 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.095327 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.095333 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.095339 | orchestrator | 2025-03-23 13:33:10.095346 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-03-23 13:33:10.095352 | orchestrator | Sunday 23 March 2025 13:32:23 +0000 (0:00:00.608) 0:08:13.812 ********** 2025-03-23 13:33:10.095358 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.095364 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.095370 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.095377 | orchestrator | 2025-03-23 13:33:10.095383 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-03-23 13:33:10.095389 | orchestrator | Sunday 23 March 2025 13:32:24 +0000 (0:00:00.355) 0:08:14.168 ********** 2025-03-23 13:33:10.095395 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.095401 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.095408 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.095414 | orchestrator | 2025-03-23 13:33:10.095420 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-03-23 13:33:10.095426 | orchestrator | Sunday 23 March 2025 13:32:25 +0000 (0:00:01.181) 0:08:15.350 ********** 2025-03-23 13:33:10.095432 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:33:10.095439 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:33:10.095445 | orchestrator | ok: [testbed-node-2] 
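
Note on the haproxy-config items logged above: every service (rabbitmq, skyline, prometheus, ...) is described by the same service-definition shape, and the per-frontend settings under its 'haproxy' key are what the "Copying over ... haproxy config" and "Configuring firewall for ..." tasks iterate over. As an illustration only, the skyline_apiserver entry from this run looks roughly like the Python literal below (values copied from the logged item; the healthcheck and dimensions keys are omitted here for brevity, and this is not the role's actual variable file):

# Illustrative only: shape of one service definition as it appears in the
# loop items above (skyline_apiserver on testbed-node-0 in this run).
skyline_apiserver = {
    "container_name": "skyline_apiserver",
    "group": "skyline-apiserver",
    "enabled": True,
    "image": "registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206",
    "volumes": [
        "/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
    ],
    "haproxy": {
        # one entry per frontend; 'external' distinguishes the internal
        # frontend from the one published on the external FQDN
        "skyline_apiserver": {
            "enabled": "yes",
            "mode": "http",
            "external": False,
            "port": "9998",
            "listen_port": "9998",
            "tls_backend": "no",
        },
        "skyline_apiserver_external": {
            "enabled": "yes",
            "mode": "http",
            "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "9998",
            "listen_port": "9998",
            "tls_backend": "no",
        },
    },
}

# Per the *_external entry above, the external frontend for this service is
# published as api.testbed.osism.xyz:9998.
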
2025-03-23 13:33:10.095451 | orchestrator | 2025-03-23 13:33:10.095458 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-03-23 13:33:10.095464 | orchestrator | Sunday 23 March 2025 13:32:26 +0000 (0:00:01.006) 0:08:16.356 ********** 2025-03-23 13:33:10.095470 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:33:10.095480 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:33:10.095487 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:33:10.095493 | orchestrator | 2025-03-23 13:33:10.095499 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-03-23 13:33:10.095509 | orchestrator | Sunday 23 March 2025 13:32:26 +0000 (0:00:00.345) 0:08:16.701 ********** 2025-03-23 13:33:10.095515 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:33:10.095521 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:33:10.095528 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:33:10.095534 | orchestrator | 2025-03-23 13:33:10.095540 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-03-23 13:33:10.095546 | orchestrator | Sunday 23 March 2025 13:32:28 +0000 (0:00:01.443) 0:08:18.145 ********** 2025-03-23 13:33:10.095552 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:33:10.095558 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:33:10.095564 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:33:10.095571 | orchestrator | 2025-03-23 13:33:10.095590 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-03-23 13:33:10.095597 | orchestrator | Sunday 23 March 2025 13:32:29 +0000 (0:00:01.355) 0:08:19.500 ********** 2025-03-23 13:33:10.095603 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:33:10.095610 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:33:10.095616 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:33:10.095622 | orchestrator | 2025-03-23 13:33:10.095628 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-03-23 13:33:10.095635 | orchestrator | Sunday 23 March 2025 13:32:30 +0000 (0:00:01.309) 0:08:20.810 ********** 2025-03-23 13:33:10.095641 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.095647 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.095653 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.095659 | orchestrator | 2025-03-23 13:33:10.095666 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-03-23 13:33:10.095672 | orchestrator | Sunday 23 March 2025 13:32:36 +0000 (0:00:05.473) 0:08:26.283 ********** 2025-03-23 13:33:10.095678 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:33:10.095684 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:33:10.095691 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:33:10.095697 | orchestrator | 2025-03-23 13:33:10.095703 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-03-23 13:33:10.095709 | orchestrator | Sunday 23 March 2025 13:32:39 +0000 (0:00:03.263) 0:08:29.546 ********** 2025-03-23 13:33:10.095716 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.095722 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.095728 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.095734 | orchestrator | 2025-03-23 13:33:10.095741 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] 
************* 2025-03-23 13:33:10.095747 | orchestrator | Sunday 23 March 2025 13:32:47 +0000 (0:00:07.504) 0:08:37.051 ********** 2025-03-23 13:33:10.095753 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:33:10.095759 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:33:10.095765 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:33:10.095771 | orchestrator | 2025-03-23 13:33:10.095778 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-03-23 13:33:10.095786 | orchestrator | Sunday 23 March 2025 13:32:50 +0000 (0:00:03.779) 0:08:40.830 ********** 2025-03-23 13:33:10.095793 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:33:10.095799 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:33:10.095805 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:33:10.095811 | orchestrator | 2025-03-23 13:33:10.095820 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-03-23 13:33:10.095826 | orchestrator | Sunday 23 March 2025 13:33:00 +0000 (0:00:09.873) 0:08:50.703 ********** 2025-03-23 13:33:10.095832 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.095839 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.095845 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.095851 | orchestrator | 2025-03-23 13:33:10.095857 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-03-23 13:33:10.095864 | orchestrator | Sunday 23 March 2025 13:33:01 +0000 (0:00:00.668) 0:08:51.372 ********** 2025-03-23 13:33:10.095873 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.095880 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.095886 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.095892 | orchestrator | 2025-03-23 13:33:10.095898 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-03-23 13:33:10.095905 | orchestrator | Sunday 23 March 2025 13:33:02 +0000 (0:00:00.669) 0:08:52.042 ********** 2025-03-23 13:33:10.095911 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.095917 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.095923 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.095929 | orchestrator | 2025-03-23 13:33:10.095936 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-03-23 13:33:10.095942 | orchestrator | Sunday 23 March 2025 13:33:02 +0000 (0:00:00.398) 0:08:52.440 ********** 2025-03-23 13:33:10.095948 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.095954 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.095960 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.095966 | orchestrator | 2025-03-23 13:33:10.095973 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-03-23 13:33:10.095979 | orchestrator | Sunday 23 March 2025 13:33:03 +0000 (0:00:00.670) 0:08:53.111 ********** 2025-03-23 13:33:10.095985 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.095991 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.095997 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.096003 | orchestrator | 2025-03-23 13:33:10.096010 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-03-23 13:33:10.096016 | orchestrator | Sunday 23 March 2025 
13:33:03 +0000 (0:00:00.668) 0:08:53.780 ********** 2025-03-23 13:33:10.096022 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:33:10.096028 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:33:10.096034 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:33:10.096041 | orchestrator | 2025-03-23 13:33:10.096047 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-03-23 13:33:10.096053 | orchestrator | Sunday 23 March 2025 13:33:04 +0000 (0:00:00.855) 0:08:54.635 ********** 2025-03-23 13:33:10.096059 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:33:10.096065 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:33:10.096071 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:33:10.096078 | orchestrator | 2025-03-23 13:33:10.096084 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-03-23 13:33:10.096090 | orchestrator | Sunday 23 March 2025 13:33:05 +0000 (0:00:01.021) 0:08:55.657 ********** 2025-03-23 13:33:10.096096 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:33:10.096102 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:33:10.096109 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:33:10.096117 | orchestrator | 2025-03-23 13:33:10.096124 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:33:10.096130 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-03-23 13:33:10.096137 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-03-23 13:33:10.096143 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-03-23 13:33:10.096149 | orchestrator | 2025-03-23 13:33:10.096155 | orchestrator | 2025-03-23 13:33:10.096161 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:33:10.096168 | orchestrator | Sunday 23 March 2025 13:33:07 +0000 (0:00:01.336) 0:08:56.993 ********** 2025-03-23 13:33:10.096174 | orchestrator | =============================================================================== 2025-03-23 13:33:10.096180 | orchestrator | haproxy-config : Copying over glance haproxy config -------------------- 11.64s 2025-03-23 13:33:10.096189 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 9.92s 2025-03-23 13:33:10.096196 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.87s 2025-03-23 13:33:10.096202 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 7.67s 2025-03-23 13:33:10.096208 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 7.50s 2025-03-23 13:33:10.096214 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 7.50s 2025-03-23 13:33:10.096220 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 7.48s 2025-03-23 13:33:10.096226 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.26s 2025-03-23 13:33:10.096232 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.92s 2025-03-23 13:33:10.096238 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 6.89s 2025-03-23 13:33:10.096244 | orchestrator | 
haproxy-config : Copying over aodh haproxy config ----------------------- 6.78s 2025-03-23 13:33:10.096253 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 6.67s 2025-03-23 13:33:10.096259 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 6.63s 2025-03-23 13:33:10.096266 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.62s 2025-03-23 13:33:10.096274 | orchestrator | loadbalancer : Ensuring proxysql service config subdirectories exist ---- 6.57s 2025-03-23 13:33:13.126927 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 6.37s 2025-03-23 13:33:13.127042 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 6.11s 2025-03-23 13:33:13.127061 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.87s 2025-03-23 13:33:13.127076 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.75s 2025-03-23 13:33:13.127090 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 5.50s 2025-03-23 13:33:13.127105 | orchestrator | 2025-03-23 13:33:10 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:33:13.127120 | orchestrator | 2025-03-23 13:33:10 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:33:13.127135 | orchestrator | 2025-03-23 13:33:10 | INFO  | Task 02312e1c-3166-428b-bf93-f7dad58ddd62 is in state STARTED 2025-03-23 13:33:13.127149 | orchestrator | 2025-03-23 13:33:10 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:33:13.127200 | orchestrator | 2025-03-23 13:33:13 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:33:13.127541 | orchestrator | 2025-03-23 13:33:13 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:33:13.128384 | orchestrator | 2025-03-23 13:33:13 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:33:13.129670 | orchestrator | 2025-03-23 13:33:13 | INFO  | Task 02312e1c-3166-428b-bf93-f7dad58ddd62 is in state STARTED 2025-03-23 13:33:16.169921 | orchestrator | 2025-03-23 13:33:13 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:33:16.170087 | orchestrator | 2025-03-23 13:33:16 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:33:16.172489 | orchestrator | 2025-03-23 13:33:16 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:33:16.175020 | orchestrator | 2025-03-23 13:33:16 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:33:16.177545 | orchestrator | 2025-03-23 13:33:16 | INFO  | Task 02312e1c-3166-428b-bf93-f7dad58ddd62 is in state STARTED 2025-03-23 13:33:16.178250 | orchestrator | 2025-03-23 13:33:16 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:33:19.226575 | orchestrator | 2025-03-23 13:33:19 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:33:19.234736 | orchestrator | 2025-03-23 13:33:19 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:33:19.235761 | orchestrator | 2025-03-23 13:33:19 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:33:19.237015 | orchestrator | 2025-03-23 13:33:19 | INFO  | Task 02312e1c-3166-428b-bf93-f7dad58ddd62 is 
in state STARTED 2025-03-23 13:33:22.295395 | orchestrator | 2025-03-23 13:33:19 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:33:22.295515 | orchestrator | 2025-03-23 13:33:22 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:33:22.295833 | orchestrator | 2025-03-23 13:33:22 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:33:22.297265 | orchestrator | 2025-03-23 13:33:22 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:33:22.302350 | orchestrator | 2025-03-23 13:33:22 | INFO  | Task 02312e1c-3166-428b-bf93-f7dad58ddd62 is in state STARTED 2025-03-23 13:33:25.344229 | orchestrator | 2025-03-23 13:33:22 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:33:25.344328 | orchestrator | 2025-03-23 13:33:25 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:33:25.345695 | orchestrator | 2025-03-23 13:33:25 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:33:25.347918 | orchestrator | 2025-03-23 13:33:25 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:33:25.349469 | orchestrator | 2025-03-23 13:33:25 | INFO  | Task 02312e1c-3166-428b-bf93-f7dad58ddd62 is in state STARTED 2025-03-23 13:33:25.349639 | orchestrator | 2025-03-23 13:33:25 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:33:28.399173 | orchestrator | 2025-03-23 13:33:28 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:33:28.400967 | orchestrator | 2025-03-23 13:33:28 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:33:28.401012 | orchestrator | 2025-03-23 13:33:28 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:33:28.403130 | orchestrator | 2025-03-23 13:33:28 | INFO  | Task 02312e1c-3166-428b-bf93-f7dad58ddd62 is in state STARTED 2025-03-23 13:33:28.403677 | orchestrator | 2025-03-23 13:33:28 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:33:31.468937 | orchestrator | 2025-03-23 13:33:31 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:33:31.471077 | orchestrator | 2025-03-23 13:33:31 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:33:31.472901 | orchestrator | 2025-03-23 13:33:31 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:33:31.474228 | orchestrator | 2025-03-23 13:33:31 | INFO  | Task 02312e1c-3166-428b-bf93-f7dad58ddd62 is in state STARTED 2025-03-23 13:33:34.517057 | orchestrator | 2025-03-23 13:33:31 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:33:34.517182 | orchestrator | 2025-03-23 13:33:34 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:33:34.519154 | orchestrator | 2025-03-23 13:33:34 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:33:34.519884 | orchestrator | 2025-03-23 13:33:34 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:33:34.521038 | orchestrator | 2025-03-23 13:33:34 | INFO  | Task 02312e1c-3166-428b-bf93-f7dad58ddd62 is in state STARTED 2025-03-23 13:33:34.521128 | orchestrator | 2025-03-23 13:33:34 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:33:37.560772 | orchestrator | 2025-03-23 13:33:37 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 
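
The repeated "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from a poll-and-sleep loop: the deploy wrapper re-queries the state of each background task, waits a second, and repeats until the tasks leave the running state. The sketch below only illustrates that pattern; get_task_state() is a hypothetical stand-in that always reports SUCCESS so the example is self-contained, and it is not the actual OSISM client API.

import time

def get_task_state(task_id):
    # Hypothetical stand-in for querying a task's state; a real
    # implementation would ask the task backend. Here every task
    # reports SUCCESS immediately so the sketch terminates.
    return "SUCCESS"

def wait_for_tasks(task_ids, interval=1):
    # Poll each task once per cycle and drop it once it reaches a
    # terminal state; mirrors the "is in state STARTED" /
    # "Wait 1 second(s) until the next check" messages in the log.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

# Example with two of the task IDs seen in this run:
wait_for_tasks([
    "f8079d8c-9512-4ecd-b2ac-9d3341f82384",
    "e0c07292-e685-456d-be98-9fe1599c78a3",
])
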
2025-03-23 13:33:37.563025 | orchestrator | 2025-03-23 13:33:37 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:33:37.563067 | orchestrator | 2025-03-23 13:33:37 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:33:37.563761 | orchestrator | 2025-03-23 13:33:37 | INFO  | Task 02312e1c-3166-428b-bf93-f7dad58ddd62 is in state STARTED 2025-03-23 13:33:40.619567 | orchestrator | 2025-03-23 13:33:37 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:33:40.619744 | orchestrator | 2025-03-23 13:33:40 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:33:40.622952 | orchestrator | 2025-03-23 13:33:40 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:33:40.623015 | orchestrator | 2025-03-23 13:33:40 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:33:40.626765 | orchestrator | 2025-03-23 13:33:40 | INFO  | Task 02312e1c-3166-428b-bf93-f7dad58ddd62 is in state STARTED 2025-03-23 13:33:43.675222 | orchestrator | 2025-03-23 13:33:40 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:33:43.675344 | orchestrator | 2025-03-23 13:33:43 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:33:43.675885 | orchestrator | 2025-03-23 13:33:43 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:33:43.677426 | orchestrator | 2025-03-23 13:33:43 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:33:43.677881 | orchestrator | 2025-03-23 13:33:43 | INFO  | Task 02312e1c-3166-428b-bf93-f7dad58ddd62 is in state STARTED 2025-03-23 13:33:43.677924 | orchestrator | 2025-03-23 13:33:43 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:33:46.725932 | orchestrator | 2025-03-23 13:33:46 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:33:46.726157 | orchestrator | 2025-03-23 13:33:46 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:33:46.727619 | orchestrator | 2025-03-23 13:33:46 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:33:46.728875 | orchestrator | 2025-03-23 13:33:46 | INFO  | Task 02312e1c-3166-428b-bf93-f7dad58ddd62 is in state STARTED 2025-03-23 13:33:49.791474 | orchestrator | 2025-03-23 13:33:46 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:33:49.791614 | orchestrator | 2025-03-23 13:33:49 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:33:49.792489 | orchestrator | 2025-03-23 13:33:49 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:33:49.794154 | orchestrator | 2025-03-23 13:33:49 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:33:49.795871 | orchestrator | 2025-03-23 13:33:49 | INFO  | Task 02312e1c-3166-428b-bf93-f7dad58ddd62 is in state STARTED 2025-03-23 13:33:52.863303 | orchestrator | 2025-03-23 13:33:49 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:33:52.863407 | orchestrator | 2025-03-23 13:33:52 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:33:52.867475 | orchestrator | 2025-03-23 13:33:52 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:33:52.867796 | orchestrator | 2025-03-23 13:33:52 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 
2025-03-23 13:33:52.869991 | orchestrator | 2025-03-23 13:33:52 | INFO  | Task 02312e1c-3166-428b-bf93-f7dad58ddd62 is in state STARTED
2025-03-23 13:33:52.870422 | orchestrator | 2025-03-23 13:33:52 | INFO  | Wait 1 second(s) until the next check
2025-03-23 13:33:55.914878 | orchestrator | 2025-03-23 13:33:55 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
2025-03-23 13:33:55.916746 | orchestrator | 2025-03-23 13:33:55 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED
2025-03-23 13:33:55.917989 | orchestrator | 2025-03-23 13:33:55 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED
2025-03-23 13:33:55.920675 | orchestrator | 2025-03-23 13:33:55 | INFO  | Task 02312e1c-3166-428b-bf93-f7dad58ddd62 is in state STARTED
2025-03-23 13:33:55.920949 | orchestrator | 2025-03-23 13:33:55 | INFO  | Wait 1 second(s) until the next check
[... the same check/wait cycle for tasks f8079d8c-9512-4ecd-b2ac-9d3341f82384, e0c07292-e685-456d-be98-9fe1599c78a3, 302dbb0c-0b77-484a-9990-d112d808667b and 02312e1c-3166-428b-bf93-f7dad58ddd62 (all reported in state STARTED) repeats roughly every 3 seconds until 2025-03-23 13:35:27 ...]
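The check/wait loop above is the deploy wrapper polling the manager until the four tasks it launched leave the Celery-style STARTED state. A minimal sketch of that polling pattern follows; get_task_state() is a hypothetical stand-in for however the real client queries a task's state (it is not the osism API), and the 1-second interval mirrors the "Wait 1 second(s)" messages in the log.

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        """Poll task IDs until none of them is still PENDING/STARTED."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # hypothetical helper, not the osism API
                print(f"Task {task_id} is in state {state}")
                if state not in ("PENDING", "STARTED"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)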
2025-03-23 13:35:27.615699 | orchestrator | 2025-03-23 13:35:27 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
2025-03-23 13:35:27.616508 | orchestrator | 2025-03-23 13:35:27 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED
2025-03-23 13:35:27.618759 | orchestrator | 2025-03-23 13:35:27 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED
2025-03-23 13:35:27.622180 | orchestrator |
2025-03-23 13:35:27.622218 | orchestrator |
2025-03-23 13:35:27.622232 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-23 13:35:27.622246 | orchestrator |
2025-03-23 13:35:27.622259 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-03-23 13:35:27.622273 | orchestrator | Sunday 23 March 2025 13:33:12 +0000 (0:00:00.456) 0:00:00.457 **********
2025-03-23 13:35:27.622286 | orchestrator | ok: [testbed-node-0]
2025-03-23 13:35:27.622301 | orchestrator | ok: [testbed-node-1]
2025-03-23 13:35:27.622314 | orchestrator | ok: [testbed-node-2]
2025-03-23 13:35:27.622327 | orchestrator |
2025-03-23 13:35:27.622341 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-03-23 13:35:27.622354 | orchestrator | Sunday 23 March 2025 13:33:12 +0000 (0:00:00.465) 0:00:00.922 **********
2025-03-23 13:35:27.622368 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-03-23 13:35:27.622382 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-03-23 13:35:27.622396 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-03-23 13:35:27.622409 | orchestrator | 2025-03-23 13:35:27.622422 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-03-23 13:35:27.622435 | orchestrator | 2025-03-23 13:35:27.622449 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-03-23 13:35:27.622462 | orchestrator | Sunday 23 March 2025 13:33:12 +0000 (0:00:00.330) 0:00:01.253 ********** 2025-03-23 13:35:27.622476 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:35:27.622489 | orchestrator | 2025-03-23 13:35:27.622503 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-03-23 13:35:27.622516 | orchestrator | Sunday 23 March 2025 13:33:13 +0000 (0:00:00.780) 0:00:02.033 ********** 2025-03-23 13:35:27.622529 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-03-23 13:35:27.622567 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-03-23 13:35:27.622585 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-03-23 13:35:27.622599 | orchestrator | 2025-03-23 13:35:27.622637 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-03-23 13:35:27.622651 | orchestrator | Sunday 23 March 2025 13:33:15 +0000 (0:00:01.827) 0:00:03.860 ********** 2025-03-23 13:35:27.622668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-23 13:35:27.622684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-23 13:35:27.622738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-23 13:35:27.622754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-23 13:35:27.622784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-23 13:35:27.622798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-23 13:35:27.622823 | orchestrator | 2025-03-23 13:35:27.622837 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-03-23 13:35:27.622851 | orchestrator | Sunday 23 March 2025 13:33:17 +0000 (0:00:01.539) 0:00:05.400 ********** 2025-03-23 13:35:27.622864 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:35:27.622878 | orchestrator | 2025-03-23 13:35:27.622892 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-03-23 13:35:27.622905 | orchestrator | Sunday 23 March 2025 13:33:18 +0000 (0:00:00.961) 0:00:06.361 ********** 2025-03-23 13:35:27.622928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-23 13:35:27.622942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-23 13:35:27.622963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-23 13:35:27.622978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-23 13:35:27.623007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-23 13:35:27.623023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-23 13:35:27.623043 | orchestrator | 2025-03-23 13:35:27.623057 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-03-23 13:35:27.623071 | orchestrator | Sunday 23 March 2025 13:33:22 +0000 (0:00:04.101) 0:00:10.463 ********** 2025-03-23 13:35:27.623085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-23 13:35:27.623107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-23 13:35:27.623122 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:35:27.623144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}})  2025-03-23 13:35:27.623158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-23 13:35:27.623177 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:35:27.623191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-23 13:35:27.623212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-23 13:35:27.623226 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:35:27.623238 | orchestrator | 2025-03-23 13:35:27.623251 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-03-23 13:35:27.623267 | orchestrator | Sunday 23 March 2025 13:33:23 +0000 (0:00:01.133) 0:00:11.596 
********** 2025-03-23 13:35:27.623286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-23 13:35:27.623306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-23 13:35:27.623319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-23 13:35:27.623340 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:35:27.623353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-23 13:35:27.623366 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:35:27.623384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-23 13:35:27.623403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-23 13:35:27.623416 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:35:27.623428 | orchestrator | 2025-03-23 13:35:27.623441 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-03-23 13:35:27.623453 | orchestrator | Sunday 23 March 2025 13:33:25 +0000 (0:00:01.713) 0:00:13.310 ********** 2025-03-23 13:35:27.623466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-23 13:35:27.623479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-23 13:35:27.623500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-23 13:35:27.623526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-23 13:35:27.623540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-23 13:35:27.623561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-23 13:35:27.623575 | orchestrator | 2025-03-23 13:35:27.623587 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-03-23 13:35:27.623600 | orchestrator | Sunday 23 March 2025 13:33:28 +0000 (0:00:03.406) 0:00:16.716 ********** 2025-03-23 13:35:27.623629 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:35:27.623642 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:35:27.623654 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:35:27.623666 | orchestrator | 2025-03-23 13:35:27.623678 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-03-23 13:35:27.623691 | orchestrator | Sunday 23 March 2025 13:33:32 +0000 (0:00:03.666) 0:00:20.382 ********** 2025-03-23 13:35:27.623703 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:35:27.623725 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:35:27.623737 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:35:27.623749 | orchestrator | 2025-03-23 13:35:27.623761 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-03-23 13:35:27.623773 | orchestrator | Sunday 23 March 2025 13:33:34 +0000 (0:00:02.467) 0:00:22.850 ********** 2025-03-23 13:35:27.623793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-03-23 13:35:27 | INFO  | Task 02312e1c-3166-428b-bf93-f7dad58ddd62 is in state SUCCESS
2025-03-23 13:35:27.623808 | orchestrator | 2025-03-23 13:35:27 | INFO  | Wait 1 second(s) until the next check
2025-03-23 13:35:27.623835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-03-23 13:35:27.623848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-03-23 13:35:27.623862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-03-23 13:35:27.623898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-03-23 13:35:27.623913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
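Each container definition above carries a healthcheck of the form healthcheck_curl http://<node address>:9200 (or :5601 for the dashboards), and the play later waits for the OpenSearch API to answer ("Wait for OpenSearch to become ready" below) before configuring retention. A minimal sketch of such an HTTP readiness probe, using the internal address 192.168.16.10:9200 seen in this log and illustrative retry settings:

    import time
    import urllib.request

    def wait_for_http(url="http://192.168.16.10:9200", retries=30, delay=5):
        """Retry a plain GET until the endpoint answers; retry settings are illustrative."""
        for attempt in range(1, retries + 1):
            try:
                with urllib.request.urlopen(url, timeout=10) as response:
                    if response.status == 200:
                        print(f"{url} answered after {attempt} attempt(s)")
                        return True
            except OSError as exc:
                print(f"Attempt {attempt}: {url} not ready yet ({exc})")
            time.sleep(delay)
        return False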
13:33:39 +0000 (0:00:00.168) 0:00:27.644 ********** 2025-03-23 13:35:27.624110 | orchestrator | 2025-03-23 13:35:27.624123 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-03-23 13:35:27.624135 | orchestrator | Sunday 23 March 2025 13:33:39 +0000 (0:00:00.145) 0:00:27.790 ********** 2025-03-23 13:35:27.624148 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:35:27.624160 | orchestrator | 2025-03-23 13:35:27.624178 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-03-23 13:35:27.624190 | orchestrator | Sunday 23 March 2025 13:33:39 +0000 (0:00:00.442) 0:00:28.232 ********** 2025-03-23 13:35:27.624202 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:35:27.624214 | orchestrator | 2025-03-23 13:35:27.624226 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-03-23 13:35:27.624239 | orchestrator | Sunday 23 March 2025 13:33:40 +0000 (0:00:00.966) 0:00:29.199 ********** 2025-03-23 13:35:27.624251 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:35:27.624263 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:35:27.624275 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:35:27.624287 | orchestrator | 2025-03-23 13:35:27.624299 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-03-23 13:35:27.624311 | orchestrator | Sunday 23 March 2025 13:34:10 +0000 (0:00:29.806) 0:00:59.005 ********** 2025-03-23 13:35:27.624323 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:35:27.624336 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:35:27.624348 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:35:27.624360 | orchestrator | 2025-03-23 13:35:27.624372 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-03-23 13:35:27.624384 | orchestrator | Sunday 23 March 2025 13:35:11 +0000 (0:01:00.423) 0:01:59.429 ********** 2025-03-23 13:35:27.624396 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:35:27.624408 | orchestrator | 2025-03-23 13:35:27.624420 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-03-23 13:35:27.624432 | orchestrator | Sunday 23 March 2025 13:35:12 +0000 (0:00:01.047) 0:02:00.476 ********** 2025-03-23 13:35:27.624444 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:35:27.624456 | orchestrator | 2025-03-23 13:35:27.624468 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-03-23 13:35:27.624480 | orchestrator | Sunday 23 March 2025 13:35:15 +0000 (0:00:03.247) 0:02:03.724 ********** 2025-03-23 13:35:27.624492 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:35:27.624505 | orchestrator | 2025-03-23 13:35:27.624517 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-03-23 13:35:27.624533 | orchestrator | Sunday 23 March 2025 13:35:18 +0000 (0:00:02.850) 0:02:06.575 ********** 2025-03-23 13:35:27.624546 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:35:27.624559 | orchestrator | 2025-03-23 13:35:27.624576 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-03-23 13:35:30.671484 | orchestrator | Sunday 23 March 2025 13:35:21 +0000 (0:00:03.211) 0:02:09.786 ********** 2025-03-23 
13:35:30.671601 | orchestrator | changed: [testbed-node-0]
2025-03-23 13:35:30.671654 | orchestrator |
2025-03-23 13:35:30.671670 | orchestrator | PLAY RECAP *********************************************************************
2025-03-23 13:35:30.671685 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-23 13:35:30.671701 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-03-23 13:35:30.671715 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-03-23 13:35:30.671729 | orchestrator |
2025-03-23 13:35:30.671743 | orchestrator |
2025-03-23 13:35:30.671758 | orchestrator | TASKS RECAP ********************************************************************
2025-03-23 13:35:30.671772 | orchestrator | Sunday 23 March 2025 13:35:24 +0000 (0:00:03.306) 0:02:13.093 **********
2025-03-23 13:35:30.671786 | orchestrator | ===============================================================================
2025-03-23 13:35:30.671800 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 60.42s
2025-03-23 13:35:30.671813 | orchestrator | opensearch : Restart opensearch container ------------------------------ 29.81s
2025-03-23 13:35:30.671854 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 4.10s
2025-03-23 13:35:30.671868 | orchestrator | opensearch : Check opensearch containers -------------------------------- 3.73s
2025-03-23 13:35:30.671882 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.66s
2025-03-23 13:35:30.671895 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.41s
2025-03-23 13:35:30.671909 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.31s
2025-03-23 13:35:30.671923 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.25s
2025-03-23 13:35:30.671937 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.21s
2025-03-23 13:35:30.671950 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.85s
2025-03-23 13:35:30.671964 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.47s
2025-03-23 13:35:30.671978 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.83s
2025-03-23 13:35:30.671991 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.71s
2025-03-23 13:35:30.672005 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.54s
2025-03-23 13:35:30.672020 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.13s
2025-03-23 13:35:30.672037 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.05s
2025-03-23 13:35:30.672054 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.97s
2025-03-23 13:35:30.672069 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.96s
2025-03-23 13:35:30.672084 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.81s
2025-03-23 13:35:30.672099 | orchestrator | opensearch : include_tasks
---------------------------------------------- 0.78s 2025-03-23 13:35:30.672129 | orchestrator | 2025-03-23 13:35:30 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:35:30.673069 | orchestrator | 2025-03-23 13:35:30 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:35:30.675751 | orchestrator | 2025-03-23 13:35:30 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:35:33.733043 | orchestrator | 2025-03-23 13:35:30 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:35:33.733154 | orchestrator | 2025-03-23 13:35:33 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:35:33.733949 | orchestrator | 2025-03-23 13:35:33 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:35:33.735461 | orchestrator | 2025-03-23 13:35:33 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:35:36.780442 | orchestrator | 2025-03-23 13:35:33 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:35:36.780579 | orchestrator | 2025-03-23 13:35:36 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:35:36.781172 | orchestrator | 2025-03-23 13:35:36 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:35:36.782586 | orchestrator | 2025-03-23 13:35:36 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:35:39.816051 | orchestrator | 2025-03-23 13:35:36 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:35:39.816168 | orchestrator | 2025-03-23 13:35:39 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:35:39.816975 | orchestrator | 2025-03-23 13:35:39 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:35:39.817954 | orchestrator | 2025-03-23 13:35:39 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:35:39.818220 | orchestrator | 2025-03-23 13:35:39 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:35:42.858549 | orchestrator | 2025-03-23 13:35:42 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:35:42.860712 | orchestrator | 2025-03-23 13:35:42 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:35:42.861852 | orchestrator | 2025-03-23 13:35:42 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:35:45.902229 | orchestrator | 2025-03-23 13:35:42 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:35:45.902349 | orchestrator | 2025-03-23 13:35:45 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:35:45.903564 | orchestrator | 2025-03-23 13:35:45 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:35:45.905425 | orchestrator | 2025-03-23 13:35:45 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:35:48.949404 | orchestrator | 2025-03-23 13:35:45 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:35:48.949519 | orchestrator | 2025-03-23 13:35:48 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:35:48.950674 | orchestrator | 2025-03-23 13:35:48 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:35:48.952004 | orchestrator | 2025-03-23 13:35:48 | INFO  | Task 
302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:35:51.991375 | orchestrator | 2025-03-23 13:35:48 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:35:51.991486 | orchestrator | 2025-03-23 13:35:51 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:35:51.992992 | orchestrator | 2025-03-23 13:35:51 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:35:51.995351 | orchestrator | 2025-03-23 13:35:51 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:35:55.043969 | orchestrator | 2025-03-23 13:35:51 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:35:55.044106 | orchestrator | 2025-03-23 13:35:55 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:35:55.045448 | orchestrator | 2025-03-23 13:35:55 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:35:55.049478 | orchestrator | 2025-03-23 13:35:55 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:35:58.107937 | orchestrator | 2025-03-23 13:35:55 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:35:58.108031 | orchestrator | 2025-03-23 13:35:58 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:35:58.109331 | orchestrator | 2025-03-23 13:35:58 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:35:58.110887 | orchestrator | 2025-03-23 13:35:58 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:35:58.111451 | orchestrator | 2025-03-23 13:35:58 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:36:01.158175 | orchestrator | 2025-03-23 13:36:01 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:36:01.160342 | orchestrator | 2025-03-23 13:36:01 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:36:01.163485 | orchestrator | 2025-03-23 13:36:01 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:36:01.163903 | orchestrator | 2025-03-23 13:36:01 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:36:04.228950 | orchestrator | 2025-03-23 13:36:04 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:36:04.231885 | orchestrator | 2025-03-23 13:36:04 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:36:04.234544 | orchestrator | 2025-03-23 13:36:04 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:36:07.281170 | orchestrator | 2025-03-23 13:36:04 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:36:07.281302 | orchestrator | 2025-03-23 13:36:07 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:36:07.281561 | orchestrator | 2025-03-23 13:36:07 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:36:07.282767 | orchestrator | 2025-03-23 13:36:07 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:36:10.329039 | orchestrator | 2025-03-23 13:36:07 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:36:10.329149 | orchestrator | 2025-03-23 13:36:10 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:36:10.329952 | orchestrator | 2025-03-23 13:36:10 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state 
STARTED 2025-03-23 13:36:10.331999 | orchestrator | 2025-03-23 13:36:10 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:36:13.392017 | orchestrator | 2025-03-23 13:36:10 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:36:13.392131 | orchestrator | 2025-03-23 13:36:13 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:36:13.393025 | orchestrator | 2025-03-23 13:36:13 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:36:13.395069 | orchestrator | 2025-03-23 13:36:13 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:36:16.442263 | orchestrator | 2025-03-23 13:36:13 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:36:16.442382 | orchestrator | 2025-03-23 13:36:16 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:36:16.443236 | orchestrator | 2025-03-23 13:36:16 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:36:16.444261 | orchestrator | 2025-03-23 13:36:16 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:36:19.486882 | orchestrator | 2025-03-23 13:36:16 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:36:19.487019 | orchestrator | 2025-03-23 13:36:19 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:36:19.487725 | orchestrator | 2025-03-23 13:36:19 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:36:19.491583 | orchestrator | 2025-03-23 13:36:19 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:36:22.555818 | orchestrator | 2025-03-23 13:36:19 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:36:22.555927 | orchestrator | 2025-03-23 13:36:22 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:36:22.556959 | orchestrator | 2025-03-23 13:36:22 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:36:22.557959 | orchestrator | 2025-03-23 13:36:22 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:36:25.603505 | orchestrator | 2025-03-23 13:36:22 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:36:25.603696 | orchestrator | 2025-03-23 13:36:25 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:36:25.604830 | orchestrator | 2025-03-23 13:36:25 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:36:25.606754 | orchestrator | 2025-03-23 13:36:25 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state STARTED 2025-03-23 13:36:25.606968 | orchestrator | 2025-03-23 13:36:25 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:36:28.655595 | orchestrator | 2025-03-23 13:36:28 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:36:28.658128 | orchestrator | 2025-03-23 13:36:28 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:36:28.661481 | orchestrator | 2025-03-23 13:36:28 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:36:28.668419 | orchestrator | 2025-03-23 13:36:28 | INFO  | Task 302dbb0c-0b77-484a-9990-d112d808667b is in state SUCCESS 2025-03-23 13:36:28.670359 | orchestrator | 2025-03-23 13:36:28.670402 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible 
version 2.15.12 2025-03-23 13:36:28.670418 | orchestrator | 2025-03-23 13:36:28.670433 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-03-23 13:36:28.670449 | orchestrator | 2025-03-23 13:36:28.670464 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-03-23 13:36:28.670479 | orchestrator | Sunday 23 March 2025 13:21:33 +0000 (0:00:01.797) 0:00:01.797 ********** 2025-03-23 13:36:28.670495 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.670713 | orchestrator | 2025-03-23 13:36:28.670735 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-03-23 13:36:28.670762 | orchestrator | Sunday 23 March 2025 13:21:35 +0000 (0:00:01.373) 0:00:03.171 ********** 2025-03-23 13:36:28.670777 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-03-23 13:36:28.670792 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-03-23 13:36:28.670805 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-03-23 13:36:28.670819 | orchestrator | 2025-03-23 13:36:28.670833 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-03-23 13:36:28.670847 | orchestrator | Sunday 23 March 2025 13:21:35 +0000 (0:00:00.833) 0:00:04.004 ********** 2025-03-23 13:36:28.670862 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.670876 | orchestrator | 2025-03-23 13:36:28.670890 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-03-23 13:36:28.670904 | orchestrator | Sunday 23 March 2025 13:21:37 +0000 (0:00:01.192) 0:00:05.196 ********** 2025-03-23 13:36:28.670917 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.670933 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.670948 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.670963 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.670979 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.670994 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.671435 | orchestrator | 2025-03-23 13:36:28.671452 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-03-23 13:36:28.671466 | orchestrator | Sunday 23 March 2025 13:21:38 +0000 (0:00:01.652) 0:00:06.849 ********** 2025-03-23 13:36:28.671480 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.671494 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.671508 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.671522 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.671536 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.671574 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.671590 | orchestrator | 2025-03-23 13:36:28.671605 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-03-23 13:36:28.671644 | orchestrator | Sunday 23 March 2025 13:21:39 +0000 (0:00:01.014) 0:00:07.864 ********** 2025-03-23 13:36:28.671659 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.671674 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.671687 | 
orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.671702 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.671716 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.671730 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.671743 | orchestrator | 2025-03-23 13:36:28.671758 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-03-23 13:36:28.672545 | orchestrator | Sunday 23 March 2025 13:21:40 +0000 (0:00:01.270) 0:00:09.134 ********** 2025-03-23 13:36:28.672565 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.672579 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.672593 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.672643 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.672659 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.672672 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.672687 | orchestrator | 2025-03-23 13:36:28.672701 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-03-23 13:36:28.672715 | orchestrator | Sunday 23 March 2025 13:21:42 +0000 (0:00:01.327) 0:00:10.462 ********** 2025-03-23 13:36:28.672729 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.672742 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.672756 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.672817 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.672831 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.672845 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.672858 | orchestrator | 2025-03-23 13:36:28.672872 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-03-23 13:36:28.672887 | orchestrator | Sunday 23 March 2025 13:21:43 +0000 (0:00:00.913) 0:00:11.375 ********** 2025-03-23 13:36:28.672901 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.672915 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.672929 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.672942 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.672956 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.672970 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.672984 | orchestrator | 2025-03-23 13:36:28.672998 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-03-23 13:36:28.673012 | orchestrator | Sunday 23 March 2025 13:21:44 +0000 (0:00:01.646) 0:00:13.023 ********** 2025-03-23 13:36:28.673026 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.673041 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.673397 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.673415 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.673430 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.673443 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.673457 | orchestrator | 2025-03-23 13:36:28.673471 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-03-23 13:36:28.673485 | orchestrator | Sunday 23 March 2025 13:21:46 +0000 (0:00:01.151) 0:00:14.174 ********** 2025-03-23 13:36:28.673499 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.673513 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.673527 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.673540 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.673554 | 
orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.673568 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.673582 | orchestrator | 2025-03-23 13:36:28.673722 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-03-23 13:36:28.673746 | orchestrator | Sunday 23 March 2025 13:21:47 +0000 (0:00:01.131) 0:00:15.305 ********** 2025-03-23 13:36:28.673760 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-23 13:36:28.673789 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-23 13:36:28.673803 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-23 13:36:28.673817 | orchestrator | 2025-03-23 13:36:28.673831 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-03-23 13:36:28.673845 | orchestrator | Sunday 23 March 2025 13:21:48 +0000 (0:00:01.324) 0:00:16.630 ********** 2025-03-23 13:36:28.673858 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.673873 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.673886 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.673900 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.673914 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.673928 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.673985 | orchestrator | 2025-03-23 13:36:28.674001 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-03-23 13:36:28.674069 | orchestrator | Sunday 23 March 2025 13:21:50 +0000 (0:00:02.160) 0:00:18.791 ********** 2025-03-23 13:36:28.674089 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-03-23 13:36:28.674103 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-23 13:36:28.674117 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-23 13:36:28.674130 | orchestrator | 2025-03-23 13:36:28.674144 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-03-23 13:36:28.674158 | orchestrator | Sunday 23 March 2025 13:21:54 +0000 (0:00:03.382) 0:00:22.173 ********** 2025-03-23 13:36:28.674172 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-23 13:36:28.674186 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-23 13:36:28.674200 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-23 13:36:28.674213 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.674228 | orchestrator | 2025-03-23 13:36:28.674242 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-03-23 13:36:28.674263 | orchestrator | Sunday 23 March 2025 13:21:54 +0000 (0:00:00.714) 0:00:22.888 ********** 2025-03-23 13:36:28.674278 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-03-23 13:36:28.674369 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-03-23 13:36:28.674388 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-03-23 13:36:28.674742 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.674760 | orchestrator | 2025-03-23 13:36:28.674774 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-03-23 13:36:28.674788 | orchestrator | Sunday 23 March 2025 13:21:56 +0000 (0:00:01.733) 0:00:24.621 ********** 2025-03-23 13:36:28.674803 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-03-23 13:36:28.674818 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-03-23 13:36:28.674845 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-03-23 13:36:28.674859 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.674873 | orchestrator | 2025-03-23 13:36:28.674887 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-03-23 13:36:28.674985 | orchestrator | Sunday 23 March 2025 13:21:56 +0000 (0:00:00.381) 0:00:25.002 ********** 2025-03-23 13:36:28.675010 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-03-23 13:21:51.644545', 'end': '2025-03-23 13:21:51.934128', 'delta': '0:00:00.289583', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-03-23 13:36:28.675028 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-03-23 13:21:52.653791', 'end': '2025-03-23 13:21:52.913673', 'delta': '0:00:00.259882', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 
'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-03-23 13:36:28.675044 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-03-23 13:21:53.557395', 'end': '2025-03-23 13:21:53.824919', 'delta': '0:00:00.267524', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-03-23 13:36:28.675058 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.675073 | orchestrator | 2025-03-23 13:36:28.675087 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-03-23 13:36:28.675100 | orchestrator | Sunday 23 March 2025 13:21:57 +0000 (0:00:00.407) 0:00:25.410 ********** 2025-03-23 13:36:28.675114 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.675129 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.675143 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.675228 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.675243 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.675257 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.675271 | orchestrator | 2025-03-23 13:36:28.675286 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-03-23 13:36:28.675301 | orchestrator | Sunday 23 March 2025 13:21:59 +0000 (0:00:02.567) 0:00:27.977 ********** 2025-03-23 13:36:28.675387 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.675406 | orchestrator | 2025-03-23 13:36:28.675422 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-03-23 13:36:28.675920 | orchestrator | Sunday 23 March 2025 13:22:01 +0000 (0:00:01.837) 0:00:29.815 ********** 2025-03-23 13:36:28.675940 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.675955 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.675968 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.675982 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.675996 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.676010 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.676023 | orchestrator | 2025-03-23 13:36:28.676037 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-03-23 13:36:28.676051 | orchestrator | Sunday 23 March 2025 13:22:04 +0000 (0:00:02.385) 0:00:32.201 ********** 2025-03-23 13:36:28.676065 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.676086 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.676099 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.676113 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.676127 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.676272 | orchestrator | skipping: [testbed-node-5] 2025-03-23 
13:36:28.676292 | orchestrator | 2025-03-23 13:36:28.676306 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-03-23 13:36:28.677053 | orchestrator | Sunday 23 March 2025 13:22:05 +0000 (0:00:01.197) 0:00:33.399 ********** 2025-03-23 13:36:28.677079 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.677091 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.677104 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.677116 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.677129 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.677141 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.677502 | orchestrator | 2025-03-23 13:36:28.677516 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-03-23 13:36:28.677529 | orchestrator | Sunday 23 March 2025 13:22:06 +0000 (0:00:01.104) 0:00:34.503 ********** 2025-03-23 13:36:28.677639 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.677657 | orchestrator | 2025-03-23 13:36:28.677670 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-03-23 13:36:28.677683 | orchestrator | Sunday 23 March 2025 13:22:06 +0000 (0:00:00.174) 0:00:34.677 ********** 2025-03-23 13:36:28.677695 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.677708 | orchestrator | 2025-03-23 13:36:28.677720 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-03-23 13:36:28.677732 | orchestrator | Sunday 23 March 2025 13:22:06 +0000 (0:00:00.304) 0:00:34.982 ********** 2025-03-23 13:36:28.677745 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.677757 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.677769 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.677781 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.677794 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.677806 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.677818 | orchestrator | 2025-03-23 13:36:28.677830 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-03-23 13:36:28.677843 | orchestrator | Sunday 23 March 2025 13:22:07 +0000 (0:00:01.123) 0:00:36.105 ********** 2025-03-23 13:36:28.677855 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.677867 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.677879 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.677892 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.677904 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.677916 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.677928 | orchestrator | 2025-03-23 13:36:28.677940 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-03-23 13:36:28.677966 | orchestrator | Sunday 23 March 2025 13:22:09 +0000 (0:00:01.257) 0:00:37.363 ********** 2025-03-23 13:36:28.677979 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.677991 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.678003 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.678080 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.678096 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.678108 | orchestrator | skipping: [testbed-node-5] 
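Aside: the skipped ceph-facts steps above ("resolve device link(s)" and "set_fact build devices from resolved symlinks") exist to normalize any /dev/disk/by-* symlinks in a configured devices list to canonical block-device paths before the OSD roles consume them; they are skipped in this run because their conditions are not met. A minimal Python sketch of that kind of normalization, using hypothetical example paths rather than this deployment's actual configuration, could look like this:

    import os

    # Hypothetical devices list mixing a by-id symlink and a plain device name.
    # These paths are illustrative only and are not read from the testbed config.
    configured_devices = [
        "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2a7b2e0f-187f-479d-baca-3c89b3a54e1f",
        "/dev/sdc",
        "/dev/sdc",
    ]

    def resolve_devices(devices):
        """Resolve symlinked device paths to their canonical form and
        de-duplicate them while preserving the original order."""
        resolved = []
        for dev in devices:
            # realpath follows /dev/disk/by-* symlink chains; plain paths
            # (or paths that do not exist on this host) pass through unchanged.
            real = os.path.realpath(dev)
            if real not in resolved:
                resolved.append(real)
        return resolved

    if __name__ == "__main__":
        print(resolve_devices(configured_devices))

On a host where the by-id link points at /dev/sdb, this would print ['/dev/sdb', '/dev/sdc'], which is the shape of list the subsequent OSD provisioning expects.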
2025-03-23 13:36:28.678120 | orchestrator | 2025-03-23 13:36:28.678133 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-03-23 13:36:28.678145 | orchestrator | Sunday 23 March 2025 13:22:10 +0000 (0:00:00.987) 0:00:38.351 ********** 2025-03-23 13:36:28.678194 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.678207 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.678220 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.678232 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.678246 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.678259 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.678272 | orchestrator | 2025-03-23 13:36:28.678286 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-03-23 13:36:28.678300 | orchestrator | Sunday 23 March 2025 13:22:11 +0000 (0:00:01.266) 0:00:39.617 ********** 2025-03-23 13:36:28.678313 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.678327 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.678340 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.678353 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.678367 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.678381 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.678394 | orchestrator | 2025-03-23 13:36:28.678408 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-03-23 13:36:28.678422 | orchestrator | Sunday 23 March 2025 13:22:12 +0000 (0:00:01.131) 0:00:40.749 ********** 2025-03-23 13:36:28.678435 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.678448 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.678462 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.678475 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.678489 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.678502 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.678516 | orchestrator | 2025-03-23 13:36:28.678537 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-03-23 13:36:28.678552 | orchestrator | Sunday 23 March 2025 13:22:14 +0000 (0:00:01.665) 0:00:42.415 ********** 2025-03-23 13:36:28.678565 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.678579 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.678593 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.678670 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.678687 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.678700 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.678712 | orchestrator | 2025-03-23 13:36:28.678725 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-03-23 13:36:28.678737 | orchestrator | Sunday 23 March 2025 13:22:15 +0000 (0:00:01.389) 0:00:43.805 ********** 2025-03-23 13:36:28.678751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.678765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.678877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.678901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.678915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.678927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.678940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.678952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b604898d-e561-495d-9e07-a30ca177e2ec', 'scsi-SQEMU_QEMU_HARDDISK_b604898d-e561-495d-9e07-a30ca177e2ec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b604898d-e561-495d-9e07-a30ca177e2ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_b604898d-e561-495d-9e07-a30ca177e2ec-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b604898d-e561-495d-9e07-a30ca177e2ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_b604898d-e561-495d-9e07-a30ca177e2ec-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b604898d-e561-495d-9e07-a30ca177e2ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_b604898d-e561-495d-9e07-a30ca177e2ec-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b604898d-e561-495d-9e07-a30ca177e2ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_b604898d-e561-495d-9e07-a30ca177e2ec-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.679044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a7b2e0f-187f-479d-baca-3c89b3a54e1f', 'scsi-SQEMU_QEMU_HARDDISK_2a7b2e0f-187f-479d-baca-3c89b3a54e1f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.679057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_98af1d1a-144c-4faa-87cf-25faeb3fb806', 'scsi-SQEMU_QEMU_HARDDISK_98af1d1a-144c-4faa-87cf-25faeb3fb806'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.679068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d22c86c-8b28-4de1-9381-02b0bcd9097d', 'scsi-SQEMU_QEMU_HARDDISK_6d22c86c-8b28-4de1-9381-02b0bcd9097d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.679079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-03-23-12-37-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.679091 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.679101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43d9102f-90c9-419e-a677-363110a73750', 'scsi-SQEMU_QEMU_HARDDISK_43d9102f-90c9-419e-a677-363110a73750'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43d9102f-90c9-419e-a677-363110a73750-part1', 'scsi-SQEMU_QEMU_HARDDISK_43d9102f-90c9-419e-a677-363110a73750-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43d9102f-90c9-419e-a677-363110a73750-part14', 'scsi-SQEMU_QEMU_HARDDISK_43d9102f-90c9-419e-a677-363110a73750-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43d9102f-90c9-419e-a677-363110a73750-part15', 'scsi-SQEMU_QEMU_HARDDISK_43d9102f-90c9-419e-a677-363110a73750-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43d9102f-90c9-419e-a677-363110a73750-part16', 'scsi-SQEMU_QEMU_HARDDISK_43d9102f-90c9-419e-a677-363110a73750-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.679321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red 
Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93f5d818-3956-4b7b-8e36-fec820f5f0d8', 'scsi-SQEMU_QEMU_HARDDISK_93f5d818-3956-4b7b-8e36-fec820f5f0d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.679333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1784947b-28d0-43a3-b38e-e79d14638f2f', 'scsi-SQEMU_QEMU_HARDDISK_1784947b-28d0-43a3-b38e-e79d14638f2f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.679344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be2f0237-b6d9-43f0-8b5b-dde81dc603fc', 'scsi-SQEMU_QEMU_HARDDISK_be2f0237-b6d9-43f0-8b5b-dde81dc603fc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.679354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-03-23-12-37-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.679365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c3c2b3-a12b-436d-ac05-ef0f7338b1b5', 'scsi-SQEMU_QEMU_HARDDISK_62c3c2b3-a12b-436d-ac05-ef0f7338b1b5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c3c2b3-a12b-436d-ac05-ef0f7338b1b5-part1', 'scsi-SQEMU_QEMU_HARDDISK_62c3c2b3-a12b-436d-ac05-ef0f7338b1b5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c3c2b3-a12b-436d-ac05-ef0f7338b1b5-part14', 'scsi-SQEMU_QEMU_HARDDISK_62c3c2b3-a12b-436d-ac05-ef0f7338b1b5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c3c2b3-a12b-436d-ac05-ef0f7338b1b5-part15', 'scsi-SQEMU_QEMU_HARDDISK_62c3c2b3-a12b-436d-ac05-ef0f7338b1b5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c3c2b3-a12b-436d-ac05-ef0f7338b1b5-part16', 'scsi-SQEMU_QEMU_HARDDISK_62c3c2b3-a12b-436d-ac05-ef0f7338b1b5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.679608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3062c13-e6b5-492c-acee-00491b2788e1', 'scsi-SQEMU_QEMU_HARDDISK_a3062c13-e6b5-492c-acee-00491b2788e1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.679640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_de771d17-f0b5-4049-b48c-6cbd0f44ea02', 'scsi-SQEMU_QEMU_HARDDISK_de771d17-f0b5-4049-b48c-6cbd0f44ea02'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.679651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_94c1a16d-83bf-4bb0-903c-e73c9d50c029', 'scsi-SQEMU_QEMU_HARDDISK_94c1a16d-83bf-4bb0-903c-e73c9d50c029'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.679662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-03-23-12-37-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.679673 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.679683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8229b7a0--df8d--5815--8245--22e3d24081aa-osd--block--8229b7a0--df8d--5815--8245--22e3d24081aa', 'dm-uuid-LVM-Z48ckVyGrsEeeM12MXfzlAr80MqHOespAGApPAmB7UHP51wAby8gktMaL6KtU0Hl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0ab6ed36--da2c--5faf--8aed--224e80357d25-osd--block--0ab6ed36--da2c--5faf--8aed--224e80357d25', 'dm-uuid-LVM-zYaqQ23yxO7oCX7AViyErFqgtgm1yBm7L9P0v3gzBf1hcDGQ6eBC5dhWbbHjuIKZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679778 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679827 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.679838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679890 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.679965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48', 'scsi-SQEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48-part1', 'scsi-SQEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48-part14', 'scsi-SQEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48-part15', 'scsi-SQEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48-part16', 'scsi-SQEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.679983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8229b7a0--df8d--5815--8245--22e3d24081aa-osd--block--8229b7a0--df8d--5815--8245--22e3d24081aa'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-X1HqtA-mY6g-cTGC-a6FL-CEW3-JT27-9th0tU', 'scsi-0QEMU_QEMU_HARDDISK_006f5652-5a72-4e84-ab95-2470543ee754', 'scsi-SQEMU_QEMU_HARDDISK_006f5652-5a72-4e84-ab95-2470543ee754'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.679995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0ab6ed36--da2c--5faf--8aed--224e80357d25-osd--block--0ab6ed36--da2c--5faf--8aed--224e80357d25'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FZYVEW-6g1r-vH9N-I6jR-a7bf-2KeT-uyr6IJ', 'scsi-0QEMU_QEMU_HARDDISK_3a31d3ab-ce8e-4019-949d-50084d47806b', 'scsi-SQEMU_QEMU_HARDDISK_3a31d3ab-ce8e-4019-949d-50084d47806b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.680012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4783091f-b49d-4bb1-a12d-8bcd2b1f8992', 'scsi-SQEMU_QEMU_HARDDISK_4783091f-b49d-4bb1-a12d-8bcd2b1f8992'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.680023 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5102d35b--39ce--5a2f--80bc--7bd1ce5c8233-osd--block--5102d35b--39ce--5a2f--80bc--7bd1ce5c8233', 'dm-uuid-LVM-zFH3MJEtAsPE2iavoT5XYn5YfZ3YqLSGcmGfiJi1IHocp1MjXSfepodyJQ5KjueO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.680082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-03-23-12-37-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.680097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cbe43cef--cccc--569d--93a4--8e7e2e8a94cb-osd--block--cbe43cef--cccc--569d--93a4--8e7e2e8a94cb', 'dm-uuid-LVM-V3IcR6k6ADm7uboWm9b3H9L00Lf56OvneHKsqEh9vhDPYb4hIlfo8AalFIUo9etO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.680108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-03-23 13:36:28.680118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.680129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.680146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.680156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.680167 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.680177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.680192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.680252 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.680279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74', 'scsi-SQEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74-part1', 'scsi-SQEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74-part14', 'scsi-SQEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74-part15', 'scsi-SQEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74-part16', 'scsi-SQEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.680299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5102d35b--39ce--5a2f--80bc--7bd1ce5c8233-osd--block--5102d35b--39ce--5a2f--80bc--7bd1ce5c8233'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YA4ubi-Sdu3-XPZh-H0Ab-au4v-HCqU-auA1rk', 'scsi-0QEMU_QEMU_HARDDISK_87cd8e5e-a11a-492b-892b-8d669e416dd6', 'scsi-SQEMU_QEMU_HARDDISK_87cd8e5e-a11a-492b-892b-8d669e416dd6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.680310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--cbe43cef--cccc--569d--93a4--8e7e2e8a94cb-osd--block--cbe43cef--cccc--569d--93a4--8e7e2e8a94cb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BOFdCw-FwIk-Rtak-6zoR-95RH-RmmV-m5b3wM', 'scsi-0QEMU_QEMU_HARDDISK_8fbee761-a5d2-4623-bf19-8989346ac6dd', 'scsi-SQEMU_QEMU_HARDDISK_8fbee761-a5d2-4623-bf19-8989346ac6dd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.680372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9205bfbb--9f4f--501b--85a3--60f418fff160-osd--block--9205bfbb--9f4f--501b--85a3--60f418fff160', 'dm-uuid-LVM-hCtV4JXP0S36MwrQytf7UEoTu7ekt75BjonYE2XGZ5VNBQXOgGCeY733w01OQ2la'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.680387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea8f35dc-a35e-4089-a77c-db984e90bcf5', 'scsi-SQEMU_QEMU_HARDDISK_ea8f35dc-a35e-4089-a77c-db984e90bcf5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.680397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-03-23-12-37-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.680408 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.680418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5a8506d3--5e74--5dde--8df3--17f522800900-osd--block--5a8506d3--5e74--5dde--8df3--17f522800900', 'dm-uuid-LVM-kSzZTeeRafQzJOtfEIdDvkZxGp8wDkCEcUcFC1jtiwTvO1VkDPpkRpE0XZtCm87M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.680435 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.680445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.680455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.680465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.680525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.680552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.680563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.680578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:36:28.680595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81', 'scsi-SQEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81-part1', 'scsi-SQEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81-part14', 'scsi-SQEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81-part15', 'scsi-SQEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81-part16', 'scsi-SQEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.680675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9205bfbb--9f4f--501b--85a3--60f418fff160-osd--block--9205bfbb--9f4f--501b--85a3--60f418fff160'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1sO0lN-s91F-duyc-sh8W-xPDu-3cl6-erbJKd', 'scsi-0QEMU_QEMU_HARDDISK_fc6f372a-e295-454e-89b3-dab7283bde6d', 'scsi-SQEMU_QEMU_HARDDISK_fc6f372a-e295-454e-89b3-dab7283bde6d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.680691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5a8506d3--5e74--5dde--8df3--17f522800900-osd--block--5a8506d3--5e74--5dde--8df3--17f522800900'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Ip3cp0-9I4g-dskc-D0U7-g73E-3FDn-ZxK0KN', 'scsi-0QEMU_QEMU_HARDDISK_cb506f9f-b661-4909-8304-0e1e52bc58d9', 'scsi-SQEMU_QEMU_HARDDISK_cb506f9f-b661-4909-8304-0e1e52bc58d9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.680703 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e74186d-4472-4929-8a20-9843938772e5', 'scsi-SQEMU_QEMU_HARDDISK_9e74186d-4472-4929-8a20-9843938772e5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.680720 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-03-23-12-37-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:36:28.680731 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.680741 | orchestrator | 2025-03-23 13:36:28.680751 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-03-23 13:36:28.680762 | orchestrator | Sunday 23 March 2025 13:22:18 +0000 (0:00:03.318) 0:00:47.123 ********** 2025-03-23 13:36:28.680772 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.680782 | orchestrator | 2025-03-23 13:36:28.680792 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-03-23 13:36:28.680802 | orchestrator | Sunday 23 March 2025 13:22:19 +0000 (0:00:00.562) 0:00:47.685 ********** 2025-03-23 13:36:28.680812 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.680822 | orchestrator | 2025-03-23 13:36:28.680844 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-03-23 13:36:28.680856 | orchestrator | Sunday 23 March 2025 13:22:19 +0000 (0:00:00.343) 0:00:48.029 ********** 2025-03-23 13:36:28.680866 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.680877 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.680888 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.680898 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.680909 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.680919 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.680930 | orchestrator | 2025-03-23 13:36:28.680941 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-03-23 13:36:28.680951 | orchestrator | Sunday 23 March 2025 13:22:21 +0000 
(0:00:01.273) 0:00:49.303 ********** 2025-03-23 13:36:28.680962 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.680973 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.680984 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.680995 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.681005 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.681016 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.681026 | orchestrator | 2025-03-23 13:36:28.681037 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-03-23 13:36:28.681048 | orchestrator | Sunday 23 March 2025 13:22:22 +0000 (0:00:01.448) 0:00:50.752 ********** 2025-03-23 13:36:28.681059 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.681069 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.681080 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.681090 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.681101 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.681112 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.681123 | orchestrator | 2025-03-23 13:36:28.681133 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-03-23 13:36:28.681144 | orchestrator | Sunday 23 March 2025 13:22:23 +0000 (0:00:00.894) 0:00:51.646 ********** 2025-03-23 13:36:28.681155 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.681166 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.681176 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.681187 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.681203 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.681265 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.681280 | orchestrator | 2025-03-23 13:36:28.681291 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-03-23 13:36:28.681302 | orchestrator | Sunday 23 March 2025 13:22:24 +0000 (0:00:01.243) 0:00:52.890 ********** 2025-03-23 13:36:28.681313 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.681323 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.681334 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.681345 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.681356 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.681366 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.681377 | orchestrator | 2025-03-23 13:36:28.681387 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-03-23 13:36:28.681398 | orchestrator | Sunday 23 March 2025 13:22:25 +0000 (0:00:01.198) 0:00:54.088 ********** 2025-03-23 13:36:28.681409 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.681420 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.681431 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.681441 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.681452 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.681463 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.681474 | orchestrator | 2025-03-23 13:36:28.681485 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-03-23 13:36:28.681495 | orchestrator | Sunday 23 March 2025 13:22:27 +0000 (0:00:01.508) 0:00:55.597 ********** 2025-03-23 13:36:28.681506 | 
orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.681517 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.681528 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.681543 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.681554 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.681565 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.681575 | orchestrator | 2025-03-23 13:36:28.681586 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-03-23 13:36:28.681597 | orchestrator | Sunday 23 March 2025 13:22:28 +0000 (0:00:01.357) 0:00:56.954 ********** 2025-03-23 13:36:28.681607 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-23 13:36:28.681662 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-03-23 13:36:28.681687 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-23 13:36:28.681697 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-03-23 13:36:28.681707 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-23 13:36:28.681717 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-03-23 13:36:28.681727 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.681737 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-03-23 13:36:28.681747 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-03-23 13:36:28.681761 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-03-23 13:36:28.681771 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-03-23 13:36:28.681781 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.681790 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.681798 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-03-23 13:36:28.681807 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-03-23 13:36:28.681815 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-03-23 13:36:28.681824 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.681832 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-03-23 13:36:28.681841 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-03-23 13:36:28.681849 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-03-23 13:36:28.681863 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.681872 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-03-23 13:36:28.681880 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-03-23 13:36:28.681889 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.681898 | orchestrator | 2025-03-23 13:36:28.681907 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-03-23 13:36:28.681916 | orchestrator | Sunday 23 March 2025 13:22:31 +0000 (0:00:02.989) 0:00:59.944 ********** 2025-03-23 13:36:28.681926 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-23 13:36:28.681935 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-03-23 13:36:28.681944 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-23 13:36:28.681953 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-03-23 
13:36:28.681962 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-03-23 13:36:28.681971 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-23 13:36:28.681980 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.681990 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-03-23 13:36:28.681999 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-03-23 13:36:28.682008 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-03-23 13:36:28.682048 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.682060 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-03-23 13:36:28.682069 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-03-23 13:36:28.682079 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.682089 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-03-23 13:36:28.682098 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-03-23 13:36:28.682107 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.682116 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-03-23 13:36:28.682126 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-03-23 13:36:28.682135 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-03-23 13:36:28.682199 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.682212 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-03-23 13:36:28.682222 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-03-23 13:36:28.682232 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.682241 | orchestrator | 2025-03-23 13:36:28.682251 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-03-23 13:36:28.682259 | orchestrator | Sunday 23 March 2025 13:22:35 +0000 (0:00:03.574) 0:01:03.518 ********** 2025-03-23 13:36:28.682268 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-23 13:36:28.682277 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-03-23 13:36:28.682285 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-03-23 13:36:28.682294 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-03-23 13:36:28.682302 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-03-23 13:36:28.682311 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-03-23 13:36:28.682320 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-03-23 13:36:28.682328 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-03-23 13:36:28.682337 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-03-23 13:36:28.682345 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-03-23 13:36:28.682354 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-03-23 13:36:28.682362 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-03-23 13:36:28.682371 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-03-23 13:36:28.682379 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-03-23 13:36:28.682397 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-03-23 13:36:28.682406 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-03-23 13:36:28.682414 | orchestrator | ok: 
[testbed-node-3] => (item=testbed-node-2) 2025-03-23 13:36:28.682423 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-03-23 13:36:28.682431 | orchestrator | 2025-03-23 13:36:28.682440 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-03-23 13:36:28.682449 | orchestrator | Sunday 23 March 2025 13:22:40 +0000 (0:00:05.321) 0:01:08.840 ********** 2025-03-23 13:36:28.682457 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-23 13:36:28.682466 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-23 13:36:28.682475 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-03-23 13:36:28.682483 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-23 13:36:28.682492 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-03-23 13:36:28.682500 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.682509 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-03-23 13:36:28.682518 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.682526 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-03-23 13:36:28.682535 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-03-23 13:36:28.682543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-03-23 13:36:28.682555 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-03-23 13:36:28.682564 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-03-23 13:36:28.682573 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-03-23 13:36:28.682581 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.682590 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-03-23 13:36:28.682598 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-03-23 13:36:28.682607 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.682628 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-03-23 13:36:28.682638 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.682646 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-03-23 13:36:28.682655 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-03-23 13:36:28.682664 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-03-23 13:36:28.682672 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.682681 | orchestrator | 2025-03-23 13:36:28.682689 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-03-23 13:36:28.682698 | orchestrator | Sunday 23 March 2025 13:22:41 +0000 (0:00:01.259) 0:01:10.100 ********** 2025-03-23 13:36:28.682707 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-23 13:36:28.682719 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-23 13:36:28.682728 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-23 13:36:28.682737 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-03-23 13:36:28.682745 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-03-23 13:36:28.682754 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-03-23 13:36:28.682763 | orchestrator | skipping: [testbed-node-0] 2025-03-23 
13:36:28.682771 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-03-23 13:36:28.682780 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-03-23 13:36:28.682789 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.682798 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-03-23 13:36:28.682806 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-03-23 13:36:28.682815 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-03-23 13:36:28.682828 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-03-23 13:36:28.682837 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.682846 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-03-23 13:36:28.682902 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-03-23 13:36:28.682915 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-03-23 13:36:28.682924 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.682934 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.682943 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-03-23 13:36:28.682953 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-03-23 13:36:28.682962 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-03-23 13:36:28.682972 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.682981 | orchestrator | 2025-03-23 13:36:28.682990 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-03-23 13:36:28.683000 | orchestrator | Sunday 23 March 2025 13:22:43 +0000 (0:00:01.151) 0:01:11.252 ********** 2025-03-23 13:36:28.683009 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-03-23 13:36:28.683018 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-03-23 13:36:28.683028 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-03-23 13:36:28.683038 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-03-23 13:36:28.683047 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-03-23 13:36:28.683056 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-03-23 13:36:28.683065 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-03-23 13:36:28.683074 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-03-23 13:36:28.683084 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-03-23 13:36:28.683093 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-03-23 13:36:28.683107 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-03-23 13:36:28.683116 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-03-23 13:36:28.683125 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-03-23 13:36:28.683135 | 
orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.683144 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-03-23 13:36:28.683153 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-03-23 13:36:28.683163 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.683172 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-03-23 13:36:28.683182 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-03-23 13:36:28.683191 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-03-23 13:36:28.683200 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.683210 | orchestrator | 2025-03-23 13:36:28.683219 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-03-23 13:36:28.683228 | orchestrator | Sunday 23 March 2025 13:22:44 +0000 (0:00:01.839) 0:01:13.091 ********** 2025-03-23 13:36:28.683238 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.683247 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.683261 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.683271 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.683280 | orchestrator | 2025-03-23 13:36:28.683289 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-03-23 13:36:28.683299 | orchestrator | Sunday 23 March 2025 13:22:47 +0000 (0:00:02.282) 0:01:15.373 ********** 2025-03-23 13:36:28.683308 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.683318 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.683327 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.683336 | orchestrator | 2025-03-23 13:36:28.683346 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-03-23 13:36:28.683355 | orchestrator | Sunday 23 March 2025 13:22:48 +0000 (0:00:00.941) 0:01:16.315 ********** 2025-03-23 13:36:28.683364 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.683374 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.683383 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.683416 | orchestrator | 2025-03-23 13:36:28.683426 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-03-23 13:36:28.683435 | orchestrator | Sunday 23 March 2025 13:22:49 +0000 (0:00:01.350) 0:01:17.665 ********** 2025-03-23 13:36:28.683444 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.683454 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.683463 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.683472 | orchestrator | 2025-03-23 13:36:28.683481 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-03-23 13:36:28.683491 | orchestrator | Sunday 23 March 2025 13:22:50 +0000 (0:00:00.991) 0:01:18.657 ********** 2025-03-23 13:36:28.683500 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.683509 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.683518 | orchestrator | ok: [testbed-node-5] 
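[Editor's note] The ceph-facts tasks above first build _monitor_addresses as a list of name/address pairs for the three control nodes (192.168.16.10-12 in this run) and then have each host select its own entry into _current_monitor_address, which is why every host reports one "ok" item and skips the rest. A minimal sketch of that accumulate-then-select pattern is shown below; the group name "mons" and the per-host variable "monitor_address" are assumptions for illustration and not the literal ceph-ansible implementation.

    - name: Accumulate monitor name/address pairs (sketch of the pattern in the log)
      ansible.builtin.set_fact:
        _monitor_addresses: "{{ _monitor_addresses | default([]) + [{'name': item, 'addr': hostvars[item]['monitor_address']}] }}"
      loop: "{{ groups['mons'] }}"   # the three control nodes, testbed-node-0..2 here

    - name: Each host selects its own entry (_current_monitor_address)
      ansible.builtin.set_fact:
        _current_monitor_address: "{{ item.addr }}"
      loop: "{{ _monitor_addresses }}"
      when: item.name == inventory_hostname   # one ok item per host, the others skipped

The later radosgw tasks follow the same shape on testbed-node-3..5: _radosgw_address is set from radosgw_address and rgw_instances is built per instance index, which is why those tasks report "ok" only on the three compute/OSD nodes.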
2025-03-23 13:36:28.683528 | orchestrator | 2025-03-23 13:36:28.683537 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-03-23 13:36:28.683595 | orchestrator | Sunday 23 March 2025 13:22:53 +0000 (0:00:02.753) 0:01:21.410 ********** 2025-03-23 13:36:28.683607 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.683628 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.683637 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.683646 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.683654 | orchestrator | 2025-03-23 13:36:28.683663 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-03-23 13:36:28.683672 | orchestrator | Sunday 23 March 2025 13:22:54 +0000 (0:00:01.184) 0:01:22.594 ********** 2025-03-23 13:36:28.683680 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.683689 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.683697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.683706 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.683715 | orchestrator | 2025-03-23 13:36:28.683723 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-03-23 13:36:28.683732 | orchestrator | Sunday 23 March 2025 13:22:56 +0000 (0:00:01.799) 0:01:24.394 ********** 2025-03-23 13:36:28.683740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.683749 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.683757 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.683766 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.683779 | orchestrator | 2025-03-23 13:36:28.683788 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-23 13:36:28.683796 | orchestrator | Sunday 23 March 2025 13:22:57 +0000 (0:00:01.133) 0:01:25.527 ********** 2025-03-23 13:36:28.683810 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.683819 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.683828 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.683840 | orchestrator | 2025-03-23 13:36:28.683849 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-03-23 13:36:28.683857 | orchestrator | Sunday 23 March 2025 13:22:58 +0000 (0:00:00.941) 0:01:26.468 ********** 2025-03-23 13:36:28.683866 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-03-23 13:36:28.683874 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-03-23 13:36:28.683883 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-03-23 13:36:28.683891 | orchestrator | 2025-03-23 13:36:28.683900 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-03-23 13:36:28.683908 | orchestrator | Sunday 23 March 2025 13:23:00 +0000 (0:00:02.249) 0:01:28.718 ********** 2025-03-23 13:36:28.683917 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.683925 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.683934 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.683942 | orchestrator | 2025-03-23 13:36:28.683951 | orchestrator | TASK [ceph-facts : reset 
rgw_instances (workaround)] *************************** 2025-03-23 13:36:28.683959 | orchestrator | Sunday 23 March 2025 13:23:01 +0000 (0:00:00.688) 0:01:29.406 ********** 2025-03-23 13:36:28.683968 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.683976 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.683985 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.683993 | orchestrator | 2025-03-23 13:36:28.684002 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-03-23 13:36:28.684011 | orchestrator | Sunday 23 March 2025 13:23:02 +0000 (0:00:00.905) 0:01:30.311 ********** 2025-03-23 13:36:28.684019 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-23 13:36:28.684028 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.684037 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-23 13:36:28.684045 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.684054 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-23 13:36:28.684062 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.684071 | orchestrator | 2025-03-23 13:36:28.684080 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-03-23 13:36:28.684088 | orchestrator | Sunday 23 March 2025 13:23:03 +0000 (0:00:01.045) 0:01:31.357 ********** 2025-03-23 13:36:28.684097 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-03-23 13:36:28.684105 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.684114 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-03-23 13:36:28.684123 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.684131 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-03-23 13:36:28.684140 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.684148 | orchestrator | 2025-03-23 13:36:28.684160 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-03-23 13:36:28.684169 | orchestrator | Sunday 23 March 2025 13:23:04 +0000 (0:00:01.154) 0:01:32.511 ********** 2025-03-23 13:36:28.684178 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.684186 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.684195 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-03-23 13:36:28.684203 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-03-23 13:36:28.684212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.684220 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.684228 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-03-23 13:36:28.684241 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-03-23 13:36:28.684250 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-03-23 13:36:28.684258 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.684312 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-03-23 13:36:28.684324 | orchestrator | skipping: [testbed-node-5] 2025-03-23 
13:36:28.684333 | orchestrator | 2025-03-23 13:36:28.684341 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-03-23 13:36:28.684350 | orchestrator | Sunday 23 March 2025 13:23:05 +0000 (0:00:01.037) 0:01:33.549 ********** 2025-03-23 13:36:28.684358 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.684367 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.684375 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.684384 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.684392 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.684401 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.684409 | orchestrator | 2025-03-23 13:36:28.684418 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-03-23 13:36:28.684426 | orchestrator | Sunday 23 March 2025 13:23:06 +0000 (0:00:01.205) 0:01:34.754 ********** 2025-03-23 13:36:28.684435 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-23 13:36:28.684443 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-23 13:36:28.684452 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-23 13:36:28.684460 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-03-23 13:36:28.684468 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-03-23 13:36:28.684477 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-03-23 13:36:28.684485 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-03-23 13:36:28.684494 | orchestrator | 2025-03-23 13:36:28.684502 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-03-23 13:36:28.684511 | orchestrator | Sunday 23 March 2025 13:23:07 +0000 (0:00:01.124) 0:01:35.879 ********** 2025-03-23 13:36:28.684519 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-23 13:36:28.684528 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-23 13:36:28.684537 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-23 13:36:28.684545 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-03-23 13:36:28.684554 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-03-23 13:36:28.684562 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-03-23 13:36:28.684571 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-03-23 13:36:28.684579 | orchestrator | 2025-03-23 13:36:28.684588 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-03-23 13:36:28.684596 | orchestrator | Sunday 23 March 2025 13:23:10 +0000 (0:00:02.898) 0:01:38.777 ********** 2025-03-23 13:36:28.684605 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.684640 | orchestrator | 2025-03-23 13:36:28.684650 | orchestrator | TASK [ceph-handler : 
check for a mon container] ******************************** 2025-03-23 13:36:28.684658 | orchestrator | Sunday 23 March 2025 13:23:12 +0000 (0:00:02.194) 0:01:40.972 ********** 2025-03-23 13:36:28.684667 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.684675 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.684684 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.684693 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.684707 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.684716 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.684724 | orchestrator | 2025-03-23 13:36:28.684733 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-03-23 13:36:28.684742 | orchestrator | Sunday 23 March 2025 13:23:13 +0000 (0:00:00.906) 0:01:41.878 ********** 2025-03-23 13:36:28.684750 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.684759 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.684767 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.684775 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.684784 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.684792 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.684801 | orchestrator | 2025-03-23 13:36:28.684810 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-03-23 13:36:28.684818 | orchestrator | Sunday 23 March 2025 13:23:15 +0000 (0:00:01.653) 0:01:43.532 ********** 2025-03-23 13:36:28.684827 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.684835 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.684844 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.684852 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.684861 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.684869 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.684878 | orchestrator | 2025-03-23 13:36:28.684886 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-03-23 13:36:28.684895 | orchestrator | Sunday 23 March 2025 13:23:17 +0000 (0:00:01.800) 0:01:45.332 ********** 2025-03-23 13:36:28.684904 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.684912 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.684921 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.684929 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.684949 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.684959 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.684968 | orchestrator | 2025-03-23 13:36:28.684977 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-03-23 13:36:28.684987 | orchestrator | Sunday 23 March 2025 13:23:18 +0000 (0:00:01.405) 0:01:46.737 ********** 2025-03-23 13:36:28.684996 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.685005 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.685068 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.685086 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.685096 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.685106 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.685116 | orchestrator | 2025-03-23 13:36:28.685126 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-03-23 
13:36:28.685136 | orchestrator | Sunday 23 March 2025 13:23:19 +0000 (0:00:00.744) 0:01:47.481 ********** 2025-03-23 13:36:28.685146 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.685156 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.685165 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.685175 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.685185 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.685195 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.685205 | orchestrator | 2025-03-23 13:36:28.685215 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-03-23 13:36:28.685225 | orchestrator | Sunday 23 March 2025 13:23:20 +0000 (0:00:00.759) 0:01:48.241 ********** 2025-03-23 13:36:28.685235 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.685244 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.685254 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.685264 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.685274 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.685284 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.685294 | orchestrator | 2025-03-23 13:36:28.685304 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-03-23 13:36:28.685318 | orchestrator | Sunday 23 March 2025 13:23:20 +0000 (0:00:00.709) 0:01:48.950 ********** 2025-03-23 13:36:28.685328 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.685337 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.685345 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.685354 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.685363 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.685372 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.685381 | orchestrator | 2025-03-23 13:36:28.685390 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-03-23 13:36:28.685399 | orchestrator | Sunday 23 March 2025 13:23:21 +0000 (0:00:00.970) 0:01:49.921 ********** 2025-03-23 13:36:28.685408 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.685417 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.685426 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.685435 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.685444 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.685453 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.685462 | orchestrator | 2025-03-23 13:36:28.685471 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-03-23 13:36:28.685480 | orchestrator | Sunday 23 March 2025 13:23:22 +0000 (0:00:00.591) 0:01:50.512 ********** 2025-03-23 13:36:28.685489 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.685498 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.685507 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.685516 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.685525 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.685534 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.685543 | orchestrator | 2025-03-23 13:36:28.685552 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 
2025-03-23 13:36:28.685561 | orchestrator | Sunday 23 March 2025 13:23:23 +0000 (0:00:00.783) 0:01:51.296 ********** 2025-03-23 13:36:28.685570 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.685579 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.685588 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.685597 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.685606 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.685629 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.685638 | orchestrator | 2025-03-23 13:36:28.685646 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-03-23 13:36:28.685655 | orchestrator | Sunday 23 March 2025 13:23:24 +0000 (0:00:01.063) 0:01:52.359 ********** 2025-03-23 13:36:28.685663 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.685672 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.685680 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.685688 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.685697 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.685705 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.685714 | orchestrator | 2025-03-23 13:36:28.685722 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-03-23 13:36:28.685730 | orchestrator | Sunday 23 March 2025 13:23:24 +0000 (0:00:00.733) 0:01:53.093 ********** 2025-03-23 13:36:28.685739 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.685747 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.685756 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.685764 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.685773 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.685781 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.685790 | orchestrator | 2025-03-23 13:36:28.685798 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-03-23 13:36:28.685807 | orchestrator | Sunday 23 March 2025 13:23:25 +0000 (0:00:00.740) 0:01:53.833 ********** 2025-03-23 13:36:28.685815 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.685835 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.685844 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.685853 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.685861 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.685870 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.685878 | orchestrator | 2025-03-23 13:36:28.685887 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-03-23 13:36:28.685895 | orchestrator | Sunday 23 March 2025 13:23:26 +0000 (0:00:00.793) 0:01:54.627 ********** 2025-03-23 13:36:28.685904 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.685912 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.685920 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.685929 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.685937 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.685945 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.685954 | orchestrator | 2025-03-23 13:36:28.685962 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-03-23 13:36:28.686036 | orchestrator | Sunday 23 March 2025 13:23:27 +0000 
(0:00:00.647) 0:01:55.275 ********** 2025-03-23 13:36:28.686050 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.686060 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.686069 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.686078 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.686087 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.686096 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.686105 | orchestrator | 2025-03-23 13:36:28.686114 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-03-23 13:36:28.686123 | orchestrator | Sunday 23 March 2025 13:23:27 +0000 (0:00:00.761) 0:01:56.036 ********** 2025-03-23 13:36:28.686132 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.686141 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.686150 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.686159 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.686168 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.686177 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.686186 | orchestrator | 2025-03-23 13:36:28.686195 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-03-23 13:36:28.686204 | orchestrator | Sunday 23 March 2025 13:23:28 +0000 (0:00:00.611) 0:01:56.647 ********** 2025-03-23 13:36:28.686213 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.686222 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.686231 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.686240 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.686249 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.686258 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.686267 | orchestrator | 2025-03-23 13:36:28.686276 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-03-23 13:36:28.686285 | orchestrator | Sunday 23 March 2025 13:23:29 +0000 (0:00:01.178) 0:01:57.825 ********** 2025-03-23 13:36:28.686294 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.686303 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.686312 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.686321 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.686331 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.686340 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.686349 | orchestrator | 2025-03-23 13:36:28.686358 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-03-23 13:36:28.686367 | orchestrator | Sunday 23 March 2025 13:23:30 +0000 (0:00:00.762) 0:01:58.588 ********** 2025-03-23 13:36:28.686376 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.686385 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.686394 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.686403 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.686412 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.686421 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.686435 | orchestrator | 2025-03-23 13:36:28.686444 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-03-23 13:36:28.686457 | orchestrator | Sunday 23 March 2025 13:23:31 +0000 (0:00:00.855) 0:01:59.443 ********** 2025-03-23 13:36:28.686467 | 
orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.686476 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.686485 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.686494 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.686503 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.686512 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.686521 | orchestrator | 2025-03-23 13:36:28.686530 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-03-23 13:36:28.686539 | orchestrator | Sunday 23 March 2025 13:23:31 +0000 (0:00:00.648) 0:02:00.092 ********** 2025-03-23 13:36:28.686548 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.686557 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.686566 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.686579 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.686588 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.686608 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.686656 | orchestrator | 2025-03-23 13:36:28.686667 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-03-23 13:36:28.686677 | orchestrator | Sunday 23 March 2025 13:23:32 +0000 (0:00:00.852) 0:02:00.945 ********** 2025-03-23 13:36:28.686687 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.686696 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.686706 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.686714 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.686723 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.686731 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.686739 | orchestrator | 2025-03-23 13:36:28.686748 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-03-23 13:36:28.686756 | orchestrator | Sunday 23 March 2025 13:23:33 +0000 (0:00:00.639) 0:02:01.584 ********** 2025-03-23 13:36:28.686765 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.686773 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.686782 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.686790 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.686798 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.686807 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.686815 | orchestrator | 2025-03-23 13:36:28.686824 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-03-23 13:36:28.686832 | orchestrator | Sunday 23 March 2025 13:23:34 +0000 (0:00:00.789) 0:02:02.373 ********** 2025-03-23 13:36:28.686841 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.686849 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.686858 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.686866 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.686874 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.686883 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.686891 | orchestrator | 2025-03-23 13:36:28.686900 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-03-23 13:36:28.686908 | orchestrator | Sunday 23 March 2025 13:23:34 +0000 (0:00:00.589) 0:02:02.963 ********** 2025-03-23 
13:36:28.686917 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.686925 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.686934 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.686942 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.686951 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.686959 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.686968 | orchestrator | 2025-03-23 13:36:28.687033 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-03-23 13:36:28.687053 | orchestrator | Sunday 23 March 2025 13:23:35 +0000 (0:00:00.780) 0:02:03.743 ********** 2025-03-23 13:36:28.687063 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.687071 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.687080 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.687088 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.687097 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.687105 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.687114 | orchestrator | 2025-03-23 13:36:28.687123 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-03-23 13:36:28.687132 | orchestrator | Sunday 23 March 2025 13:23:36 +0000 (0:00:00.778) 0:02:04.522 ********** 2025-03-23 13:36:28.687140 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.687149 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.687157 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.687166 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.687174 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.687183 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.687191 | orchestrator | 2025-03-23 13:36:28.687200 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-03-23 13:36:28.687208 | orchestrator | Sunday 23 March 2025 13:23:37 +0000 (0:00:00.806) 0:02:05.329 ********** 2025-03-23 13:36:28.687217 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.687226 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.687234 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.687243 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.687251 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.687259 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.687268 | orchestrator | 2025-03-23 13:36:28.687277 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-03-23 13:36:28.687285 | orchestrator | Sunday 23 March 2025 13:23:37 +0000 (0:00:00.669) 0:02:05.998 ********** 2025-03-23 13:36:28.687294 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.687302 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.687311 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.687319 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.687328 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.687336 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.687349 | orchestrator | 2025-03-23 13:36:28.687358 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-03-23 13:36:28.687366 
| orchestrator | Sunday 23 March 2025 13:23:38 +0000 (0:00:01.014) 0:02:07.013 ********** 2025-03-23 13:36:28.687375 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.687383 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.687392 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.687400 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.687409 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.687417 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.687426 | orchestrator | 2025-03-23 13:36:28.687434 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-03-23 13:36:28.687443 | orchestrator | Sunday 23 March 2025 13:23:39 +0000 (0:00:00.887) 0:02:07.901 ********** 2025-03-23 13:36:28.687452 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.687460 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.687469 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.687477 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.687486 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.687494 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.687503 | orchestrator | 2025-03-23 13:36:28.687511 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-03-23 13:36:28.687520 | orchestrator | Sunday 23 March 2025 13:23:40 +0000 (0:00:00.965) 0:02:08.866 ********** 2025-03-23 13:36:28.687533 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-03-23 13:36:28.687542 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-03-23 13:36:28.687550 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.687559 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-03-23 13:36:28.687567 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-03-23 13:36:28.687576 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.687584 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-03-23 13:36:28.687593 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-03-23 13:36:28.687602 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-23 13:36:28.687610 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-23 13:36:28.687629 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.687638 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-23 13:36:28.687646 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-23 13:36:28.687653 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.687661 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.687669 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-23 13:36:28.687681 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-23 13:36:28.687689 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.687697 | orchestrator | 2025-03-23 13:36:28.687705 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-03-23 13:36:28.687713 | orchestrator | Sunday 23 March 2025 13:23:41 +0000 (0:00:00.837) 0:02:09.704 ********** 2025-03-23 13:36:28.687721 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-03-23 13:36:28.687728 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-03-23 13:36:28.687736 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.687744 | orchestrator | skipping: 
[testbed-node-1] => (item=osd memory target)  2025-03-23 13:36:28.687752 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-03-23 13:36:28.687783 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.687792 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-03-23 13:36:28.687800 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-03-23 13:36:28.687854 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.687866 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-03-23 13:36:28.687874 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-03-23 13:36:28.687882 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.687890 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-03-23 13:36:28.687898 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-03-23 13:36:28.687905 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.687913 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-03-23 13:36:28.687921 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-03-23 13:36:28.687929 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.687937 | orchestrator | 2025-03-23 13:36:28.687945 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-03-23 13:36:28.687953 | orchestrator | Sunday 23 March 2025 13:23:42 +0000 (0:00:01.117) 0:02:10.821 ********** 2025-03-23 13:36:28.687960 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.687968 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.687976 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.687984 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.687992 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.688000 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.688007 | orchestrator | 2025-03-23 13:36:28.688015 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-03-23 13:36:28.688023 | orchestrator | Sunday 23 March 2025 13:23:43 +0000 (0:00:00.815) 0:02:11.637 ********** 2025-03-23 13:36:28.688031 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.688045 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.688053 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.688061 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.688069 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.688077 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.688084 | orchestrator | 2025-03-23 13:36:28.688092 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-03-23 13:36:28.688101 | orchestrator | Sunday 23 March 2025 13:23:44 +0000 (0:00:01.157) 0:02:12.794 ********** 2025-03-23 13:36:28.688109 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.688116 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.688124 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.688132 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.688140 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.688148 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.688156 | orchestrator | 2025-03-23 
13:36:28.688164 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-03-23 13:36:28.688171 | orchestrator | Sunday 23 March 2025 13:23:45 +0000 (0:00:00.716) 0:02:13.511 ********** 2025-03-23 13:36:28.688179 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.688187 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.688195 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.688203 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.688211 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.688219 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.688227 | orchestrator | 2025-03-23 13:36:28.688235 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-03-23 13:36:28.688243 | orchestrator | Sunday 23 March 2025 13:23:46 +0000 (0:00:00.940) 0:02:14.452 ********** 2025-03-23 13:36:28.688250 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.688258 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.688266 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.688278 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.688285 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.688293 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.688301 | orchestrator | 2025-03-23 13:36:28.688312 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-03-23 13:36:28.688321 | orchestrator | Sunday 23 March 2025 13:23:46 +0000 (0:00:00.670) 0:02:15.122 ********** 2025-03-23 13:36:28.688329 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.688337 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.688344 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.688352 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.688360 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.688368 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.688376 | orchestrator | 2025-03-23 13:36:28.688384 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-03-23 13:36:28.688392 | orchestrator | Sunday 23 March 2025 13:23:47 +0000 (0:00:00.964) 0:02:16.087 ********** 2025-03-23 13:36:28.688400 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-23 13:36:28.688408 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-23 13:36:28.688416 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-23 13:36:28.688424 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.688432 | orchestrator | 2025-03-23 13:36:28.688440 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-03-23 13:36:28.688448 | orchestrator | Sunday 23 March 2025 13:23:48 +0000 (0:00:00.462) 0:02:16.550 ********** 2025-03-23 13:36:28.688456 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-23 13:36:28.688464 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-23 13:36:28.688472 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-23 13:36:28.688486 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.688494 | orchestrator | 2025-03-23 13:36:28.688502 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 
2025-03-23 13:36:28.688510 | orchestrator | Sunday 23 March 2025 13:23:48 +0000 (0:00:00.435) 0:02:16.986 ********** 2025-03-23 13:36:28.688518 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-23 13:36:28.688526 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-23 13:36:28.688534 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-23 13:36:28.688597 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.688609 | orchestrator | 2025-03-23 13:36:28.688629 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-23 13:36:28.688637 | orchestrator | Sunday 23 March 2025 13:23:49 +0000 (0:00:00.465) 0:02:17.452 ********** 2025-03-23 13:36:28.688645 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.688653 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.688661 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.688669 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.688677 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.688684 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.688692 | orchestrator | 2025-03-23 13:36:28.688700 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-03-23 13:36:28.688708 | orchestrator | Sunday 23 March 2025 13:23:50 +0000 (0:00:00.896) 0:02:18.348 ********** 2025-03-23 13:36:28.688716 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-03-23 13:36:28.688724 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.688732 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-03-23 13:36:28.688740 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.688748 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-03-23 13:36:28.688756 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.688764 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-23 13:36:28.688772 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.688779 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-23 13:36:28.688787 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.688795 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-23 13:36:28.688803 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.688811 | orchestrator | 2025-03-23 13:36:28.688819 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-03-23 13:36:28.688827 | orchestrator | Sunday 23 March 2025 13:23:51 +0000 (0:00:00.968) 0:02:19.317 ********** 2025-03-23 13:36:28.688834 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.688842 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.688850 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.688858 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.688866 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.688874 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.688882 | orchestrator | 2025-03-23 13:36:28.688889 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-23 13:36:28.688897 | orchestrator | Sunday 23 March 2025 13:23:52 +0000 (0:00:00.989) 0:02:20.307 ********** 2025-03-23 13:36:28.688905 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.688913 | orchestrator | skipping: [testbed-node-1] 
2025-03-23 13:36:28.688921 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.688929 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.688936 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.688944 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.688952 | orchestrator | 2025-03-23 13:36:28.688960 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-03-23 13:36:28.688975 | orchestrator | Sunday 23 March 2025 13:23:52 +0000 (0:00:00.732) 0:02:21.040 ********** 2025-03-23 13:36:28.688983 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-03-23 13:36:28.688997 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.689005 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-03-23 13:36:28.689013 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.689021 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-03-23 13:36:28.689029 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.689037 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-23 13:36:28.689045 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.689052 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-23 13:36:28.689060 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.689068 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-23 13:36:28.689076 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.689084 | orchestrator | 2025-03-23 13:36:28.689092 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-03-23 13:36:28.689100 | orchestrator | Sunday 23 March 2025 13:23:54 +0000 (0:00:01.140) 0:02:22.180 ********** 2025-03-23 13:36:28.689108 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.689116 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.689123 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.689135 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-03-23 13:36:28.689143 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.689151 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-03-23 13:36:28.689159 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.689167 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-03-23 13:36:28.689176 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.689183 | orchestrator | 2025-03-23 13:36:28.689191 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-03-23 13:36:28.689199 | orchestrator | Sunday 23 March 2025 13:23:55 +0000 (0:00:01.073) 0:02:23.254 ********** 2025-03-23 13:36:28.689207 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-23 13:36:28.689215 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-23 13:36:28.689223 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-23 13:36:28.689231 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.689240 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-03-23 13:36:28.689249 | orchestrator | skipping: [testbed-node-1] 
=> (item=testbed-node-4)  2025-03-23 13:36:28.689258 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-03-23 13:36:28.689269 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-03-23 13:36:28.689323 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-03-23 13:36:28.689335 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-03-23 13:36:28.689344 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.689353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.689362 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.689370 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.689379 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.689388 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-03-23 13:36:28.689397 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-03-23 13:36:28.689405 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-03-23 13:36:28.689414 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.689427 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.689437 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-03-23 13:36:28.689451 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-03-23 13:36:28.689460 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-03-23 13:36:28.689468 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.689477 | orchestrator | 2025-03-23 13:36:28.689486 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-03-23 13:36:28.689495 | orchestrator | Sunday 23 March 2025 13:23:57 +0000 (0:00:02.551) 0:02:25.806 ********** 2025-03-23 13:36:28.689504 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.689512 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.689521 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.689530 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.689539 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.689548 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.689556 | orchestrator | 2025-03-23 13:36:28.689565 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-03-23 13:36:28.689573 | orchestrator | Sunday 23 March 2025 13:23:59 +0000 (0:00:01.617) 0:02:27.423 ********** 2025-03-23 13:36:28.689580 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.689588 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.689596 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.689604 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-23 13:36:28.689612 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.689631 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-03-23 13:36:28.689639 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.689647 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-03-23 13:36:28.689654 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.689662 | orchestrator | 2025-03-23 13:36:28.689670 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-03-23 13:36:28.689678 | orchestrator | Sunday 23 
March 2025 13:24:01 +0000 (0:00:02.005) 0:02:29.428 ********** 2025-03-23 13:36:28.689686 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.689694 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.689702 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.689710 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.689717 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.689725 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.689733 | orchestrator | 2025-03-23 13:36:28.689741 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-03-23 13:36:28.689749 | orchestrator | Sunday 23 March 2025 13:24:02 +0000 (0:00:01.478) 0:02:30.906 ********** 2025-03-23 13:36:28.689757 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.689765 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.689773 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.689780 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.689788 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.689796 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.689804 | orchestrator | 2025-03-23 13:36:28.689812 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] *********** 2025-03-23 13:36:28.689820 | orchestrator | Sunday 23 March 2025 13:24:04 +0000 (0:00:01.472) 0:02:32.378 ********** 2025-03-23 13:36:28.689828 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.689836 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.689843 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.689851 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.689859 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.689867 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.689875 | orchestrator | 2025-03-23 13:36:28.689886 | orchestrator | TASK [ceph-container-common : enable ceph.target] ****************************** 2025-03-23 13:36:28.689894 | orchestrator | Sunday 23 March 2025 13:24:06 +0000 (0:00:02.658) 0:02:35.037 ********** 2025-03-23 13:36:28.689902 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.689910 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.689925 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.689933 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.689941 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.689949 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.689957 | orchestrator | 2025-03-23 13:36:28.689964 | orchestrator | TASK [ceph-container-common : include prerequisites.yml] *********************** 2025-03-23 13:36:28.689972 | orchestrator | Sunday 23 March 2025 13:24:10 +0000 (0:00:03.238) 0:02:38.275 ********** 2025-03-23 13:36:28.689981 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.689989 | orchestrator | 2025-03-23 13:36:28.689997 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************ 2025-03-23 13:36:28.690005 | orchestrator | Sunday 23 March 2025 13:24:11 +0000 (0:00:01.474) 0:02:39.750 ********** 2025-03-23 13:36:28.690013 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.690059 | orchestrator | skipping: [testbed-node-1] 2025-03-23 
13:36:28.690068 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.690076 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.690084 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.690092 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.690100 | orchestrator | 2025-03-23 13:36:28.690152 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] **************** 2025-03-23 13:36:28.690163 | orchestrator | Sunday 23 March 2025 13:24:12 +0000 (0:00:01.058) 0:02:40.809 ********** 2025-03-23 13:36:28.690172 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.690179 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.690187 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.690195 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.690203 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.690211 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.690223 | orchestrator | 2025-03-23 13:36:28.690231 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] ************************** 2025-03-23 13:36:28.690239 | orchestrator | Sunday 23 March 2025 13:24:13 +0000 (0:00:00.609) 0:02:41.419 ********** 2025-03-23 13:36:28.690247 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-03-23 13:36:28.690255 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-03-23 13:36:28.690262 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-03-23 13:36:28.690270 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-03-23 13:36:28.690278 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-03-23 13:36:28.690286 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-03-23 13:36:28.690294 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-03-23 13:36:28.690302 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-03-23 13:36:28.690310 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-03-23 13:36:28.690318 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-03-23 13:36:28.690325 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-03-23 13:36:28.690333 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-03-23 13:36:28.690341 | orchestrator | 2025-03-23 13:36:28.690349 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ******************** 2025-03-23 13:36:28.690357 | orchestrator | Sunday 23 March 2025 13:24:15 +0000 (0:00:02.035) 0:02:43.454 ********** 2025-03-23 13:36:28.690364 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.690372 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.690386 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.690394 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.690402 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.690410 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.690431 | orchestrator | 2025-03-23 13:36:28.690439 | orchestrator | TASK [ceph-container-common : restore 
certificates selinux context] ************ 2025-03-23 13:36:28.690448 | orchestrator | Sunday 23 March 2025 13:24:16 +0000 (0:00:01.166) 0:02:44.621 ********** 2025-03-23 13:36:28.690456 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.690465 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.690473 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.690481 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.690490 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.690498 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.690507 | orchestrator | 2025-03-23 13:36:28.690515 | orchestrator | TASK [ceph-container-common : include registry.yml] **************************** 2025-03-23 13:36:28.690524 | orchestrator | Sunday 23 March 2025 13:24:17 +0000 (0:00:01.113) 0:02:45.734 ********** 2025-03-23 13:36:28.690532 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.690541 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.690549 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.690558 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.690566 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.690574 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.690583 | orchestrator | 2025-03-23 13:36:28.690591 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-03-23 13:36:28.690600 | orchestrator | Sunday 23 March 2025 13:24:18 +0000 (0:00:00.852) 0:02:46.587 ********** 2025-03-23 13:36:28.690609 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.690653 | orchestrator | 2025-03-23 13:36:28.690661 | orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] *** 2025-03-23 13:36:28.690670 | orchestrator | Sunday 23 March 2025 13:24:20 +0000 (0:00:01.689) 0:02:48.277 ********** 2025-03-23 13:36:28.690678 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.690686 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.690694 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.690702 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.690709 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.690717 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.690725 | orchestrator | 2025-03-23 13:36:28.690737 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] *** 2025-03-23 13:36:28.690745 | orchestrator | Sunday 23 March 2025 13:25:01 +0000 (0:00:41.599) 0:03:29.877 ********** 2025-03-23 13:36:28.690753 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-03-23 13:36:28.690761 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-03-23 13:36:28.690768 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-03-23 13:36:28.690775 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.690782 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-03-23 13:36:28.690790 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-03-23 13:36:28.690841 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  
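
The task above pulls registry.osism.tech/osism/ceph-daemon:17.2.7 onto all six nodes and accounts for roughly 41 seconds of the run. As a minimal sketch only (the log does not show which container engine or Ansible collection the role uses), the same image could be pre-pulled with an ad-hoc playbook, assuming Docker and the community.docker collection:

# Sketch, not the ceph-ansible task logged above; assumes Docker as the
# container engine and the community.docker collection on the control host.
- hosts: "testbed-node-[0:5]"
  become: true
  tasks:
    - name: Pre-pull the Ceph daemon image used by this deployment
      community.docker.docker_image:
        name: registry.osism.tech/osism/ceph-daemon
        tag: "17.2.7"
        source: pull

Pre-pulling in an earlier step would keep the image fetch off the critical path of the ceph-container-common role.
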
2025-03-23 13:36:28.690851 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.690859 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-03-23 13:36:28.690867 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-03-23 13:36:28.690874 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-03-23 13:36:28.690882 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.690895 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-03-23 13:36:28.690903 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-03-23 13:36:28.690910 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-03-23 13:36:28.690918 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.690925 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-03-23 13:36:28.690933 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-03-23 13:36:28.690941 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-03-23 13:36:28.690948 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.690956 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-03-23 13:36:28.690964 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-03-23 13:36:28.690971 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-03-23 13:36:28.690979 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.690986 | orchestrator | 2025-03-23 13:36:28.690994 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-03-23 13:36:28.691002 | orchestrator | Sunday 23 March 2025 13:25:02 +0000 (0:00:01.040) 0:03:30.917 ********** 2025-03-23 13:36:28.691009 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.691017 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.691024 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.691032 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.691040 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.691047 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.691055 | orchestrator | 2025-03-23 13:36:28.691062 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-03-23 13:36:28.691070 | orchestrator | Sunday 23 March 2025 13:25:03 +0000 (0:00:00.746) 0:03:31.664 ********** 2025-03-23 13:36:28.691078 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.691086 | orchestrator | 2025-03-23 13:36:28.691093 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-03-23 13:36:28.691101 | orchestrator | Sunday 23 March 2025 13:25:03 +0000 (0:00:00.172) 0:03:31.837 ********** 2025-03-23 13:36:28.691108 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.691116 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.691124 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.691131 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.691139 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.691146 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.691153 | 
orchestrator | 2025-03-23 13:36:28.691160 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-03-23 13:36:28.691167 | orchestrator | Sunday 23 March 2025 13:25:04 +0000 (0:00:01.166) 0:03:33.004 ********** 2025-03-23 13:36:28.691174 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.691181 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.691188 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.691195 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.691202 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.691208 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.691215 | orchestrator | 2025-03-23 13:36:28.691222 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-03-23 13:36:28.691229 | orchestrator | Sunday 23 March 2025 13:25:05 +0000 (0:00:00.985) 0:03:33.989 ********** 2025-03-23 13:36:28.691236 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.691243 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.691253 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.691260 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.691267 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.691278 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.691285 | orchestrator | 2025-03-23 13:36:28.691292 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-03-23 13:36:28.691301 | orchestrator | Sunday 23 March 2025 13:25:07 +0000 (0:00:01.413) 0:03:35.402 ********** 2025-03-23 13:36:28.691309 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.691316 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.691322 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.691329 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.691336 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.691343 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.691350 | orchestrator | 2025-03-23 13:36:28.691357 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] *** 2025-03-23 13:36:28.691364 | orchestrator | Sunday 23 March 2025 13:25:09 +0000 (0:00:02.125) 0:03:37.527 ********** 2025-03-23 13:36:28.691371 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.691378 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.691385 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.691391 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.691398 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.691405 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.691412 | orchestrator | 2025-03-23 13:36:28.691419 | orchestrator | TASK [ceph-container-common : include release.yml] ***************************** 2025-03-23 13:36:28.691426 | orchestrator | Sunday 23 March 2025 13:25:10 +0000 (0:00:01.385) 0:03:38.913 ********** 2025-03-23 13:36:28.691433 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.691441 | orchestrator | 2025-03-23 13:36:28.691486 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] ********************* 2025-03-23 13:36:28.691495 | orchestrator | Sunday 23 March 2025 13:25:12 +0000 (0:00:01.708) 0:03:40.621 ********** 2025-03-23 
13:36:28.691503 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.691509 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.691516 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.691523 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.691530 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.691537 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.691544 | orchestrator | 2025-03-23 13:36:28.691551 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-03-23 13:36:28.691558 | orchestrator | Sunday 23 March 2025 13:25:13 +0000 (0:00:00.922) 0:03:41.543 ********** 2025-03-23 13:36:28.691564 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.691571 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.691578 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.691585 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.691592 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.691599 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.691605 | orchestrator | 2025-03-23 13:36:28.691623 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-03-23 13:36:28.691631 | orchestrator | Sunday 23 March 2025 13:25:14 +0000 (0:00:01.518) 0:03:43.062 ********** 2025-03-23 13:36:28.691638 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.691645 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.691652 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.691659 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.691666 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.691672 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.691679 | orchestrator | 2025-03-23 13:36:28.691686 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-03-23 13:36:28.691693 | orchestrator | Sunday 23 March 2025 13:25:16 +0000 (0:00:01.317) 0:03:44.380 ********** 2025-03-23 13:36:28.691700 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.691707 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.691721 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.691728 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.691734 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.691741 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.691748 | orchestrator | 2025-03-23 13:36:28.691755 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-03-23 13:36:28.691762 | orchestrator | Sunday 23 March 2025 13:25:17 +0000 (0:00:00.895) 0:03:45.275 ********** 2025-03-23 13:36:28.691769 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.691776 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.691782 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.691789 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.691796 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.691813 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.691820 | orchestrator | 2025-03-23 13:36:28.691827 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-03-23 13:36:28.691834 | orchestrator | Sunday 23 March 2025 13:25:18 +0000 (0:00:01.006) 0:03:46.282 ********** 
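
The release.yml include above derives the release codename for the detected ceph_version by testing one candidate per task; for the 17.2.7 image only the quincy task reports ok further down, while the older releases are skipped. A compressed sketch of the same mapping idea, purely illustrative and not the ceph-ansible implementation:

# Sketch: resolve a Ceph release codename from the major version.
# ceph-ansible does this with one conditional set_fact per release, as the
# surrounding log shows; the lookup table here is an illustrative shortcut.
- hosts: localhost
  gather_facts: false
  vars:
    ceph_version: "17.2.7"
    ceph_release_map:
      "16": pacific
      "17": quincy
      "18": reef
  tasks:
    - name: set_fact ceph_release from the major version
      ansible.builtin.set_fact:
        ceph_release: "{{ ceph_release_map[ceph_version.split('.') | first] }}"

    - name: show the resolved release
      ansible.builtin.debug:
        msg: "ceph {{ ceph_version }} resolves to {{ ceph_release }}"
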
2025-03-23 13:36:28.691841 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.691848 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.691855 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.691862 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.691869 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.691879 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.691886 | orchestrator | 2025-03-23 13:36:28.691893 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-03-23 13:36:28.691900 | orchestrator | Sunday 23 March 2025 13:25:18 +0000 (0:00:00.843) 0:03:47.126 ********** 2025-03-23 13:36:28.691907 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.691913 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.691920 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.691927 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.691934 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.691940 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.691947 | orchestrator | 2025-03-23 13:36:28.691954 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-03-23 13:36:28.691961 | orchestrator | Sunday 23 March 2025 13:25:20 +0000 (0:00:01.384) 0:03:48.511 ********** 2025-03-23 13:36:28.691968 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.691975 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.691982 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.691989 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.691996 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.692002 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.692009 | orchestrator | 2025-03-23 13:36:28.692016 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-03-23 13:36:28.692023 | orchestrator | Sunday 23 March 2025 13:25:21 +0000 (0:00:01.598) 0:03:50.109 ********** 2025-03-23 13:36:28.692030 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.692037 | orchestrator | 2025-03-23 13:36:28.692044 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-03-23 13:36:28.692051 | orchestrator | Sunday 23 March 2025 13:25:23 +0000 (0:00:01.512) 0:03:51.621 ********** 2025-03-23 13:36:28.692058 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-03-23 13:36:28.692065 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-03-23 13:36:28.692071 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-03-23 13:36:28.692078 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-03-23 13:36:28.692085 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-03-23 13:36:28.692092 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-03-23 13:36:28.692103 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-03-23 13:36:28.692110 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-03-23 13:36:28.692157 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-03-23 13:36:28.692167 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-03-23 
13:36:28.692175 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-03-23 13:36:28.692182 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-03-23 13:36:28.692190 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-03-23 13:36:28.692198 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-03-23 13:36:28.692205 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-03-23 13:36:28.692213 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-03-23 13:36:28.692220 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-03-23 13:36:28.692228 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-03-23 13:36:28.692235 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-03-23 13:36:28.692243 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-03-23 13:36:28.692250 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-03-23 13:36:28.692258 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-03-23 13:36:28.692265 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-03-23 13:36:28.692273 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-03-23 13:36:28.692280 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-03-23 13:36:28.692288 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-03-23 13:36:28.692296 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-03-23 13:36:28.692304 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-03-23 13:36:28.692311 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-03-23 13:36:28.692318 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-03-23 13:36:28.692326 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-03-23 13:36:28.692333 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-03-23 13:36:28.692341 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-03-23 13:36:28.692348 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-03-23 13:36:28.692356 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-03-23 13:36:28.692364 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-03-23 13:36:28.692374 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-03-23 13:36:28.692382 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-03-23 13:36:28.692389 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-03-23 13:36:28.692397 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-03-23 13:36:28.692405 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-03-23 13:36:28.692412 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-03-23 13:36:28.692420 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-03-23 13:36:28.692427 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-03-23 13:36:28.692435 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-03-23 13:36:28.692442 | orchestrator | changed: [testbed-node-1] => 
(item=/var/lib/ceph/bootstrap-mgr) 2025-03-23 13:36:28.692450 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-03-23 13:36:28.692458 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-03-23 13:36:28.692465 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-03-23 13:36:28.692489 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-03-23 13:36:28.692496 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-03-23 13:36:28.692503 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-03-23 13:36:28.692510 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-03-23 13:36:28.692517 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-03-23 13:36:28.692524 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-03-23 13:36:28.692531 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-03-23 13:36:28.692538 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-03-23 13:36:28.692545 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-03-23 13:36:28.692551 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-03-23 13:36:28.692558 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-03-23 13:36:28.692565 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-03-23 13:36:28.692572 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-03-23 13:36:28.692579 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-03-23 13:36:28.692594 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-03-23 13:36:28.692601 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-03-23 13:36:28.692608 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-03-23 13:36:28.692628 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-03-23 13:36:28.692675 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-03-23 13:36:28.692686 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-03-23 13:36:28.692693 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-03-23 13:36:28.692701 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-03-23 13:36:28.692708 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-03-23 13:36:28.692715 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-03-23 13:36:28.692722 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-03-23 13:36:28.692730 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-03-23 13:36:28.692737 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-03-23 13:36:28.692744 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-03-23 13:36:28.692752 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-03-23 13:36:28.692759 | orchestrator | 
changed: [testbed-node-1] => (item=/var/run/ceph) 2025-03-23 13:36:28.692766 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-03-23 13:36:28.692774 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-03-23 13:36:28.692781 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-03-23 13:36:28.692788 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-03-23 13:36:28.692796 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-03-23 13:36:28.692803 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-03-23 13:36:28.692810 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-03-23 13:36:28.692817 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-03-23 13:36:28.692825 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-03-23 13:36:28.692832 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-03-23 13:36:28.692839 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-03-23 13:36:28.692851 | orchestrator | 2025-03-23 13:36:28.692859 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-03-23 13:36:28.692869 | orchestrator | Sunday 23 March 2025 13:25:30 +0000 (0:00:06.883) 0:03:58.505 ********** 2025-03-23 13:36:28.692877 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.692884 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.692892 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.692900 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.692908 | orchestrator | 2025-03-23 13:36:28.692915 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-03-23 13:36:28.692922 | orchestrator | Sunday 23 March 2025 13:25:31 +0000 (0:00:01.422) 0:03:59.928 ********** 2025-03-23 13:36:28.692930 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-03-23 13:36:28.692938 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-03-23 13:36:28.692945 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-03-23 13:36:28.692953 | orchestrator | 2025-03-23 13:36:28.692960 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-03-23 13:36:28.692967 | orchestrator | Sunday 23 March 2025 13:25:32 +0000 (0:00:01.112) 0:04:01.041 ********** 2025-03-23 13:36:28.692975 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-03-23 13:36:28.692982 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-03-23 13:36:28.692990 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-03-23 13:36:28.692997 | orchestrator | 2025-03-23 13:36:28.693005 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 
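
The "create ceph initial directories" task above lays down the standard Ceph directory tree on every node, and the complete item list is visible in the log. A condensed sketch of an equivalent loop follows; the mode is an assumption, since the log only records which paths were created:

# Sketch of the directory loop logged above; the mode is an assumption,
# the log only shows the paths.
- hosts: "testbed-node-[0:5]"
  become: true
  tasks:
    - name: create ceph initial directories
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
        mode: "0755"
      loop:
        - /etc/ceph
        - /var/lib/ceph/
        - /var/lib/ceph/mon
        - /var/lib/ceph/osd
        - /var/lib/ceph/mds
        - /var/lib/ceph/tmp
        - /var/lib/ceph/radosgw
        - /var/lib/ceph/bootstrap-rgw
        - /var/lib/ceph/bootstrap-mgr
        - /var/lib/ceph/bootstrap-mds
        - /var/lib/ceph/bootstrap-osd
        - /var/lib/ceph/bootstrap-rbd
        - /var/lib/ceph/bootstrap-rbd-mirror
        - /var/run/ceph
        - /var/log/ceph
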
2025-03-23 13:36:28.693012 | orchestrator | Sunday 23 March 2025 13:25:34 +0000 (0:00:01.277) 0:04:02.318 ********** 2025-03-23 13:36:28.693019 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.693027 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.693034 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.693042 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.693049 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.693057 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.693064 | orchestrator | 2025-03-23 13:36:28.693072 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-03-23 13:36:28.693079 | orchestrator | Sunday 23 March 2025 13:25:35 +0000 (0:00:00.851) 0:04:03.170 ********** 2025-03-23 13:36:28.693086 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.693094 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.693102 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.693109 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.693116 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.693124 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.693131 | orchestrator | 2025-03-23 13:36:28.693139 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-03-23 13:36:28.693146 | orchestrator | Sunday 23 March 2025 13:25:35 +0000 (0:00:00.691) 0:04:03.862 ********** 2025-03-23 13:36:28.693154 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.693196 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.693207 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.693215 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.693222 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.693230 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.693242 | orchestrator | 2025-03-23 13:36:28.693250 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-03-23 13:36:28.693257 | orchestrator | Sunday 23 March 2025 13:25:36 +0000 (0:00:01.153) 0:04:05.015 ********** 2025-03-23 13:36:28.693265 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.693273 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.693280 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.693288 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.693295 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.693303 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.693310 | orchestrator | 2025-03-23 13:36:28.693318 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-03-23 13:36:28.693325 | orchestrator | Sunday 23 March 2025 13:25:38 +0000 (0:00:01.173) 0:04:06.189 ********** 2025-03-23 13:36:28.693333 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.693341 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.693348 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.693356 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.693363 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.693371 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.693379 | orchestrator | 2025-03-23 13:36:28.693387 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-03-23 
13:36:28.693394 | orchestrator | Sunday 23 March 2025 13:25:39 +0000 (0:00:01.261) 0:04:07.451 ********** 2025-03-23 13:36:28.693402 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.693412 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.693420 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.693428 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.693435 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.693443 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.693454 | orchestrator | 2025-03-23 13:36:28.693462 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-03-23 13:36:28.693470 | orchestrator | Sunday 23 March 2025 13:25:40 +0000 (0:00:00.931) 0:04:08.383 ********** 2025-03-23 13:36:28.693478 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.693486 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.693494 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.693501 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.693509 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.693516 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.693524 | orchestrator | 2025-03-23 13:36:28.693531 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-03-23 13:36:28.693539 | orchestrator | Sunday 23 March 2025 13:25:41 +0000 (0:00:00.974) 0:04:09.358 ********** 2025-03-23 13:36:28.693547 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.693554 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.693562 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.693569 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.693577 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.693584 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.693592 | orchestrator | 2025-03-23 13:36:28.693600 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-03-23 13:36:28.693607 | orchestrator | Sunday 23 March 2025 13:25:41 +0000 (0:00:00.656) 0:04:10.014 ********** 2025-03-23 13:36:28.693627 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.693634 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.693641 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.693648 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.693655 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.693662 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.693669 | orchestrator | 2025-03-23 13:36:28.693676 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-03-23 13:36:28.693688 | orchestrator | Sunday 23 March 2025 13:25:44 +0000 (0:00:02.588) 0:04:12.603 ********** 2025-03-23 13:36:28.693695 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.693702 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.693709 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.693716 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.693734 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.693741 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.693748 | orchestrator | 2025-03-23 13:36:28.693755 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target, override from ceph_conf_overrides] *** 2025-03-23 13:36:28.693762 | orchestrator | Sunday 23 March 2025 13:25:45 +0000 (0:00:00.631) 0:04:13.234 ********** 2025-03-23 13:36:28.693769 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-03-23 13:36:28.693776 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-03-23 13:36:28.693783 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.693790 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-03-23 13:36:28.693800 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-03-23 13:36:28.693808 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.693815 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-03-23 13:36:28.693822 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-03-23 13:36:28.693829 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.693836 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-23 13:36:28.693843 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-23 13:36:28.693850 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.693857 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-23 13:36:28.693864 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-23 13:36:28.693871 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.693878 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-23 13:36:28.693885 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-23 13:36:28.693892 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.693899 | orchestrator | 2025-03-23 13:36:28.693906 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-03-23 13:36:28.693955 | orchestrator | Sunday 23 March 2025 13:25:45 +0000 (0:00:00.882) 0:04:14.117 ********** 2025-03-23 13:36:28.693966 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-03-23 13:36:28.693977 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-03-23 13:36:28.693984 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.693992 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-03-23 13:36:28.693999 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-03-23 13:36:28.694007 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.694031 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-03-23 13:36:28.694041 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-03-23 13:36:28.694048 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.694056 | orchestrator | ok: [testbed-node-3] => (item=osd memory target) 2025-03-23 13:36:28.694063 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-03-23 13:36:28.694071 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-03-23 13:36:28.694078 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-03-23 13:36:28.694086 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-03-23 13:36:28.694093 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-03-23 13:36:28.694101 | orchestrator | 2025-03-23 13:36:28.694108 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-03-23 13:36:28.694115 | orchestrator | Sunday 23 March 2025 13:25:46 +0000 (0:00:00.820) 0:04:14.938 ********** 2025-03-23 
13:36:28.694123 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.694130 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.694137 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.694150 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.694157 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.694165 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.694172 | orchestrator | 2025-03-23 13:36:28.694180 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-03-23 13:36:28.694187 | orchestrator | Sunday 23 March 2025 13:25:48 +0000 (0:00:01.300) 0:04:16.239 ********** 2025-03-23 13:36:28.694195 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.694202 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.694209 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.694217 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.694224 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.694231 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.694239 | orchestrator | 2025-03-23 13:36:28.694246 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-03-23 13:36:28.694254 | orchestrator | Sunday 23 March 2025 13:25:48 +0000 (0:00:00.806) 0:04:17.046 ********** 2025-03-23 13:36:28.694261 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.694269 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.694276 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.694283 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.694291 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.694301 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.694309 | orchestrator | 2025-03-23 13:36:28.694316 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-03-23 13:36:28.694323 | orchestrator | Sunday 23 March 2025 13:25:50 +0000 (0:00:01.259) 0:04:18.305 ********** 2025-03-23 13:36:28.694331 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.694338 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.694346 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.694353 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.694360 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.694368 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.694375 | orchestrator | 2025-03-23 13:36:28.694387 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-03-23 13:36:28.694395 | orchestrator | Sunday 23 March 2025 13:25:50 +0000 (0:00:00.757) 0:04:19.063 ********** 2025-03-23 13:36:28.694402 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.694410 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.694417 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.694424 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.694432 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.694439 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.694446 | orchestrator | 2025-03-23 13:36:28.694454 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-03-23 13:36:28.694461 | orchestrator | Sunday 23 March 2025 13:25:52 +0000 
(0:00:01.213) 0:04:20.276 ********** 2025-03-23 13:36:28.694469 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.694476 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.694484 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.694491 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.694499 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.694506 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.694514 | orchestrator | 2025-03-23 13:36:28.694521 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-03-23 13:36:28.694529 | orchestrator | Sunday 23 March 2025 13:25:53 +0000 (0:00:01.000) 0:04:21.277 ********** 2025-03-23 13:36:28.694536 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-23 13:36:28.694543 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-23 13:36:28.694551 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-23 13:36:28.694558 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.694570 | orchestrator | 2025-03-23 13:36:28.694578 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-03-23 13:36:28.694585 | orchestrator | Sunday 23 March 2025 13:25:53 +0000 (0:00:00.726) 0:04:22.004 ********** 2025-03-23 13:36:28.694593 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-23 13:36:28.694600 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-23 13:36:28.694607 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-23 13:36:28.694646 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.694655 | orchestrator | 2025-03-23 13:36:28.694705 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-03-23 13:36:28.694716 | orchestrator | Sunday 23 March 2025 13:25:54 +0000 (0:00:00.445) 0:04:22.450 ********** 2025-03-23 13:36:28.694723 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-23 13:36:28.694731 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-23 13:36:28.694739 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-23 13:36:28.694746 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.694754 | orchestrator | 2025-03-23 13:36:28.694761 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-23 13:36:28.694769 | orchestrator | Sunday 23 March 2025 13:25:54 +0000 (0:00:00.421) 0:04:22.871 ********** 2025-03-23 13:36:28.694776 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.694784 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.694791 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.694799 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.694806 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.694814 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.694821 | orchestrator | 2025-03-23 13:36:28.694829 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-03-23 13:36:28.694836 | orchestrator | Sunday 23 March 2025 13:25:55 +0000 (0:00:00.660) 0:04:23.531 ********** 2025-03-23 13:36:28.694844 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-03-23 13:36:28.694851 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.694859 | 
orchestrator | skipping: [testbed-node-1] => (item=0)  2025-03-23 13:36:28.694866 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.694874 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-03-23 13:36:28.694882 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.694889 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-03-23 13:36:28.694897 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-03-23 13:36:28.694904 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-03-23 13:36:28.694911 | orchestrator | 2025-03-23 13:36:28.694919 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-03-23 13:36:28.694927 | orchestrator | Sunday 23 March 2025 13:25:57 +0000 (0:00:01.805) 0:04:25.337 ********** 2025-03-23 13:36:28.694934 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.694942 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.694949 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.694957 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.694964 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.694971 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.694979 | orchestrator | 2025-03-23 13:36:28.694987 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-23 13:36:28.694994 | orchestrator | Sunday 23 March 2025 13:25:57 +0000 (0:00:00.728) 0:04:26.066 ********** 2025-03-23 13:36:28.695002 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.695009 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.695016 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.695024 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.695031 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.695039 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.695046 | orchestrator | 2025-03-23 13:36:28.695059 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-03-23 13:36:28.695067 | orchestrator | Sunday 23 March 2025 13:25:58 +0000 (0:00:01.043) 0:04:27.110 ********** 2025-03-23 13:36:28.695074 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-03-23 13:36:28.695082 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.695089 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-03-23 13:36:28.695097 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.695104 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-03-23 13:36:28.695112 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.695119 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-23 13:36:28.695127 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.695134 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-23 13:36:28.695142 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.695153 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-23 13:36:28.695160 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.695168 | orchestrator | 2025-03-23 13:36:28.695175 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-03-23 13:36:28.695182 | orchestrator | Sunday 23 March 2025 13:26:00 +0000 (0:00:01.428) 0:04:28.538 ********** 2025-03-23 13:36:28.695188 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.695195 | orchestrator | 
skipping: [testbed-node-1] 2025-03-23 13:36:28.695202 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.695208 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-03-23 13:36:28.695215 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.695222 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-03-23 13:36:28.695228 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.695235 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-03-23 13:36:28.695242 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.695248 | orchestrator | 2025-03-23 13:36:28.695255 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-03-23 13:36:28.695262 | orchestrator | Sunday 23 March 2025 13:26:01 +0000 (0:00:01.077) 0:04:29.616 ********** 2025-03-23 13:36:28.695269 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-23 13:36:28.695276 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-23 13:36:28.695282 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-23 13:36:28.695289 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.695296 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-03-23 13:36:28.695333 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-03-23 13:36:28.695352 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-03-23 13:36:28.695359 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.695365 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-03-23 13:36:28.695372 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-03-23 13:36:28.695379 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-03-23 13:36:28.695385 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.695392 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.695399 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-03-23 13:36:28.695405 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.695412 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.695418 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.695425 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-03-23 13:36:28.695436 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-03-23 13:36:28.695443 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-03-23 13:36:28.695450 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.695456 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-03-23 13:36:28.695463 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-03-23 13:36:28.695470 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.695476 | orchestrator | 2025-03-23 13:36:28.695483 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-03-23 13:36:28.695489 | orchestrator | Sunday 23 March 2025 13:26:03 +0000 
(0:00:02.163) 0:04:31.780 ********** 2025-03-23 13:36:28.695496 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.695503 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.695509 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.695516 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.695522 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.695529 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.695536 | orchestrator | 2025-03-23 13:36:28.695542 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-03-23 13:36:28.695549 | orchestrator | Sunday 23 March 2025 13:26:10 +0000 (0:00:07.184) 0:04:38.964 ********** 2025-03-23 13:36:28.695556 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.695562 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.695569 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.695575 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.695582 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.695588 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.695595 | orchestrator | 2025-03-23 13:36:28.695602 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-03-23 13:36:28.695608 | orchestrator | Sunday 23 March 2025 13:26:12 +0000 (0:00:01.623) 0:04:40.588 ********** 2025-03-23 13:36:28.695625 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.695631 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.695637 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.695644 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:36:28.695650 | orchestrator | 2025-03-23 13:36:28.695656 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-03-23 13:36:28.695663 | orchestrator | Sunday 23 March 2025 13:26:13 +0000 (0:00:01.082) 0:04:41.670 ********** 2025-03-23 13:36:28.695669 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.695675 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.695681 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.695687 | orchestrator | 2025-03-23 13:36:28.695697 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] ******************* 2025-03-23 13:36:28.695703 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.695709 | orchestrator | 2025-03-23 13:36:28.695716 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-03-23 13:36:28.695722 | orchestrator | Sunday 23 March 2025 13:26:14 +0000 (0:00:01.270) 0:04:42.940 ********** 2025-03-23 13:36:28.695728 | orchestrator | 2025-03-23 13:36:28.695734 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-03-23 13:36:28.695740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.695746 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.695752 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.695759 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.695765 | orchestrator | 2025-03-23 13:36:28.695771 | orchestrator | RUNNING HANDLER 
[ceph-handler : copy mon restart script] *********************** 2025-03-23 13:36:28.695777 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.695783 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.695793 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.695799 | orchestrator | 2025-03-23 13:36:28.695805 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-03-23 13:36:28.695811 | orchestrator | Sunday 23 March 2025 13:26:15 +0000 (0:00:01.193) 0:04:44.133 ********** 2025-03-23 13:36:28.695817 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-23 13:36:28.695826 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-23 13:36:28.695832 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-23 13:36:28.695839 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.695845 | orchestrator | 2025-03-23 13:36:28.695851 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-03-23 13:36:28.695857 | orchestrator | Sunday 23 March 2025 13:26:17 +0000 (0:00:01.033) 0:04:45.166 ********** 2025-03-23 13:36:28.695863 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.695869 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.695875 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.695881 | orchestrator | 2025-03-23 13:36:28.695888 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ******************** 2025-03-23 13:36:28.695928 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.695938 | orchestrator | 2025-03-23 13:36:28.695945 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-03-23 13:36:28.695951 | orchestrator | Sunday 23 March 2025 13:26:17 +0000 (0:00:00.600) 0:04:45.766 ********** 2025-03-23 13:36:28.695958 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.695965 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.695971 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.695978 | orchestrator | 2025-03-23 13:36:28.695984 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 2025-03-23 13:36:28.695991 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.695997 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.696004 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.696010 | orchestrator | 2025-03-23 13:36:28.696017 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-03-23 13:36:28.696023 | orchestrator | Sunday 23 March 2025 13:26:18 +0000 (0:00:00.625) 0:04:46.392 ********** 2025-03-23 13:36:28.696030 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.696036 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.696043 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.696049 | orchestrator | 2025-03-23 13:36:28.696056 | orchestrator | TASK [ceph-handler : mdss handler] ********************************************* 2025-03-23 13:36:28.696063 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.696069 | orchestrator | 2025-03-23 13:36:28.696076 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-03-23 13:36:28.696082 | orchestrator | Sunday 23 March 2025 13:26:18 +0000 (0:00:00.701) 0:04:47.093 ********** 
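
The handler sequence above follows a two-step restart pattern: a notified handler first copies a small restart script to the node, and a second handler then executes it behind additional conditions, which is why "restart ceph mon daemon(s)" is skipped here. A minimal sketch of that pattern with placeholder script and ceph.conf content, not the actual role code:

# Sketch of the copy-script-then-restart handler pattern seen in the log.
# Script body and ceph.conf content are placeholders; the real role guards the
# restart with extra conditions, which is why it is skipped above.
- hosts: mons
  become: true
  handlers:
    - name: copy mon restart script
      ansible.builtin.copy:
        content: |
          #!/bin/bash
          # Placeholder: restart the mon service for this host.
          systemctl restart "ceph-mon@$(hostname -s)"
        dest: /tmp/restart_mon_daemon.sh
        mode: "0750"
      listen: restart ceph mons

    - name: restart ceph mon daemon(s)
      ansible.builtin.command: /tmp/restart_mon_daemon.sh
      listen: restart ceph mons

  tasks:
    - name: generate ceph.conf configuration file (placeholder content)
      ansible.builtin.copy:
        content: |
          [global]
          fsid = 00000000-0000-0000-0000-000000000000
        dest: /etc/ceph/ceph.conf
        mode: "0644"
      notify: restart ceph mons
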
2025-03-23 13:36:28.696089 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.696099 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.696105 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.696112 | orchestrator | 2025-03-23 13:36:28.696119 | orchestrator | TASK [ceph-handler : rgws handler] ********************************************* 2025-03-23 13:36:28.696125 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.696132 | orchestrator | 2025-03-23 13:36:28.696138 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-03-23 13:36:28.696145 | orchestrator | Sunday 23 March 2025 13:26:19 +0000 (0:00:00.805) 0:04:47.899 ********** 2025-03-23 13:36:28.696151 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.696158 | orchestrator | 2025-03-23 13:36:28.696164 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-03-23 13:36:28.696171 | orchestrator | Sunday 23 March 2025 13:26:19 +0000 (0:00:00.126) 0:04:48.025 ********** 2025-03-23 13:36:28.696177 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.696184 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.696194 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.696201 | orchestrator | 2025-03-23 13:36:28.696208 | orchestrator | TASK [ceph-handler : rbdmirrors handler] *************************************** 2025-03-23 13:36:28.696214 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.696221 | orchestrator | 2025-03-23 13:36:28.696227 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-03-23 13:36:28.696234 | orchestrator | Sunday 23 March 2025 13:26:20 +0000 (0:00:00.786) 0:04:48.812 ********** 2025-03-23 13:36:28.696241 | orchestrator | 2025-03-23 13:36:28.696247 | orchestrator | TASK [ceph-handler : mgrs handler] ********************************************* 2025-03-23 13:36:28.696254 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.696261 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:36:28.696267 | orchestrator | 2025-03-23 13:36:28.696274 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-03-23 13:36:28.696281 | orchestrator | Sunday 23 March 2025 13:26:21 +0000 (0:00:00.710) 0:04:49.522 ********** 2025-03-23 13:36:28.696287 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.696294 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.696300 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.696307 | orchestrator | 2025-03-23 13:36:28.696314 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] ******************* 2025-03-23 13:36:28.696320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.696327 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.696333 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.696340 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.696346 | orchestrator | 2025-03-23 13:36:28.696353 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-03-23 13:36:28.696362 | orchestrator | Sunday 23 March 2025 13:26:22 +0000 (0:00:00.920) 0:04:50.442 ********** 2025-03-23 
13:36:28.696369 | orchestrator | 2025-03-23 13:36:28.696375 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-03-23 13:36:28.696382 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.696388 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.696395 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.696402 | orchestrator | 2025-03-23 13:36:28.696408 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-03-23 13:36:28.696415 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.696421 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.696428 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.696434 | orchestrator | 2025-03-23 13:36:28.696441 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-03-23 13:36:28.696447 | orchestrator | Sunday 23 March 2025 13:26:23 +0000 (0:00:01.459) 0:04:51.901 ********** 2025-03-23 13:36:28.696454 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-23 13:36:28.696461 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-23 13:36:28.696467 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-23 13:36:28.696473 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.696488 | orchestrator | 2025-03-23 13:36:28.696495 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-03-23 13:36:28.696501 | orchestrator | Sunday 23 March 2025 13:26:25 +0000 (0:00:01.267) 0:04:53.169 ********** 2025-03-23 13:36:28.696508 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.696514 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.696521 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.696528 | orchestrator | 2025-03-23 13:36:28.696567 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ******************** 2025-03-23 13:36:28.696576 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.696583 | orchestrator | 2025-03-23 13:36:28.696590 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-03-23 13:36:28.696601 | orchestrator | Sunday 23 March 2025 13:26:26 +0000 (0:00:01.691) 0:04:54.861 ********** 2025-03-23 13:36:28.696607 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.696624 | orchestrator | 2025-03-23 13:36:28.696631 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-03-23 13:36:28.696637 | orchestrator | Sunday 23 March 2025 13:26:27 +0000 (0:00:00.915) 0:04:55.776 ********** 2025-03-23 13:36:28.696643 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.696649 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.696655 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.696662 | orchestrator | 2025-03-23 13:36:28.696668 | orchestrator | TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] ***************** 2025-03-23 13:36:28.696674 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.696680 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.696686 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.696693 | orchestrator | 2025-03-23 13:36:28.696699 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds 
restart script] *********************** 2025-03-23 13:36:28.696705 | orchestrator | Sunday 23 March 2025 13:26:28 +0000 (0:00:01.001) 0:04:56.778 ********** 2025-03-23 13:36:28.696711 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.696717 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.696723 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.696730 | orchestrator | 2025-03-23 13:36:28.696736 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-03-23 13:36:28.696742 | orchestrator | Sunday 23 March 2025 13:26:30 +0000 (0:00:01.962) 0:04:58.740 ********** 2025-03-23 13:36:28.696748 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.696754 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.696760 | orchestrator | 2025-03-23 13:36:28.696766 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-03-23 13:36:28.696773 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.696779 | orchestrator | 2025-03-23 13:36:28.696785 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-03-23 13:36:28.696791 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.696797 | orchestrator | 2025-03-23 13:36:28.696803 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-03-23 13:36:28.696809 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.696815 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.696822 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.696828 | orchestrator | 2025-03-23 13:36:28.696834 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-03-23 13:36:28.696840 | orchestrator | Sunday 23 March 2025 13:26:32 +0000 (0:00:01.470) 0:05:00.211 ********** 2025-03-23 13:36:28.696846 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.696853 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.696859 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.696865 | orchestrator | 2025-03-23 13:36:28.696871 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-03-23 13:36:28.696877 | orchestrator | Sunday 23 March 2025 13:26:33 +0000 (0:00:01.611) 0:05:01.823 ********** 2025-03-23 13:36:28.696883 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.696890 | orchestrator | 2025-03-23 13:36:28.696896 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-03-23 13:36:28.696902 | orchestrator | Sunday 23 March 2025 13:26:34 +0000 (0:00:00.698) 0:05:02.521 ********** 2025-03-23 13:36:28.696908 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.696914 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.696920 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.696926 | orchestrator | 2025-03-23 13:36:28.696933 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-03-23 13:36:28.696944 | orchestrator | Sunday 23 March 2025 13:26:35 +0000 (0:00:00.684) 0:05:03.205 ********** 2025-03-23 13:36:28.696950 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.696956 | orchestrator | changed: 
[testbed-node-4] 2025-03-23 13:36:28.696962 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.696969 | orchestrator | 2025-03-23 13:36:28.696975 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-03-23 13:36:28.696981 | orchestrator | Sunday 23 March 2025 13:26:36 +0000 (0:00:01.465) 0:05:04.671 ********** 2025-03-23 13:36:28.696987 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.696993 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.696999 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.697005 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.697012 | orchestrator | 2025-03-23 13:36:28.697018 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-03-23 13:36:28.697024 | orchestrator | Sunday 23 March 2025 13:26:37 +0000 (0:00:01.324) 0:05:05.995 ********** 2025-03-23 13:36:28.697030 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.697036 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.697042 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.697048 | orchestrator | 2025-03-23 13:36:28.697058 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-03-23 13:36:28.697064 | orchestrator | Sunday 23 March 2025 13:26:38 +0000 (0:00:00.395) 0:05:06.390 ********** 2025-03-23 13:36:28.697070 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.697076 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.697085 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.697092 | orchestrator | 2025-03-23 13:36:28.697098 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-03-23 13:36:28.697104 | orchestrator | Sunday 23 March 2025 13:26:39 +0000 (0:00:01.059) 0:05:07.450 ********** 2025-03-23 13:36:28.697111 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.697117 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.697156 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.697165 | orchestrator | 2025-03-23 13:36:28.697173 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-03-23 13:36:28.697179 | orchestrator | Sunday 23 March 2025 13:26:39 +0000 (0:00:00.597) 0:05:08.048 ********** 2025-03-23 13:36:28.697186 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.697193 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.697199 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.697206 | orchestrator | 2025-03-23 13:36:28.697212 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-03-23 13:36:28.697219 | orchestrator | Sunday 23 March 2025 13:26:40 +0000 (0:00:00.721) 0:05:08.769 ********** 2025-03-23 13:36:28.697226 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.697232 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.697239 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.697246 | orchestrator | 2025-03-23 13:36:28.697252 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-03-23 13:36:28.697259 | orchestrator | 2025-03-23 13:36:28.697265 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 
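The check_running_containers.yml tasks that follow probe each node for already-running Ceph daemon containers before (re)deploying them. Assuming podman as the container runtime and the usual ceph-<daemon>-<hostname> naming (both assumptions here), each check is roughly a filtered runtime query:

    # illustrative only; runtime (podman vs. docker) and name pattern are assumptions
    podman ps -q --filter "name=ceph-mon-$(hostname -s)"
    podman ps -q --filter "name=ceph-mgr-$(hostname -s)"
    podman ps -q --filter "name=ceph-crash-$(hostname -s)"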
2025-03-23 13:36:28.697272 | orchestrator | Sunday 23 March 2025 13:26:43 +0000 (0:00:03.278) 0:05:12.048 ********** 2025-03-23 13:36:28.697279 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:36:28.697286 | orchestrator | 2025-03-23 13:36:28.697292 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-03-23 13:36:28.697299 | orchestrator | Sunday 23 March 2025 13:26:44 +0000 (0:00:00.884) 0:05:12.932 ********** 2025-03-23 13:36:28.697305 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.697312 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.697325 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.697332 | orchestrator | 2025-03-23 13:36:28.697339 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-03-23 13:36:28.697345 | orchestrator | Sunday 23 March 2025 13:26:45 +0000 (0:00:00.849) 0:05:13.782 ********** 2025-03-23 13:36:28.697352 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.697358 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.697365 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.697371 | orchestrator | 2025-03-23 13:36:28.697378 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-03-23 13:36:28.697385 | orchestrator | Sunday 23 March 2025 13:26:46 +0000 (0:00:00.637) 0:05:14.420 ********** 2025-03-23 13:36:28.697391 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.697398 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.697404 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.697411 | orchestrator | 2025-03-23 13:36:28.697418 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-03-23 13:36:28.697424 | orchestrator | Sunday 23 March 2025 13:26:46 +0000 (0:00:00.482) 0:05:14.902 ********** 2025-03-23 13:36:28.697431 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.697437 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.697444 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.697450 | orchestrator | 2025-03-23 13:36:28.697457 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-03-23 13:36:28.697464 | orchestrator | Sunday 23 March 2025 13:26:47 +0000 (0:00:00.444) 0:05:15.346 ********** 2025-03-23 13:36:28.697470 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.697477 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.697483 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.697490 | orchestrator | 2025-03-23 13:36:28.697497 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-03-23 13:36:28.697503 | orchestrator | Sunday 23 March 2025 13:26:48 +0000 (0:00:00.972) 0:05:16.318 ********** 2025-03-23 13:36:28.697510 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.697516 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.697523 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.697529 | orchestrator | 2025-03-23 13:36:28.697536 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-03-23 13:36:28.697542 | orchestrator | Sunday 23 March 2025 13:26:48 +0000 (0:00:00.437) 0:05:16.756 ********** 2025-03-23 
13:36:28.697549 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.697559 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.697566 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.697573 | orchestrator | 2025-03-23 13:36:28.697580 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-03-23 13:36:28.697586 | orchestrator | Sunday 23 March 2025 13:26:48 +0000 (0:00:00.330) 0:05:17.087 ********** 2025-03-23 13:36:28.697593 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.697600 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.697606 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.697622 | orchestrator | 2025-03-23 13:36:28.697629 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-03-23 13:36:28.697635 | orchestrator | Sunday 23 March 2025 13:26:49 +0000 (0:00:00.483) 0:05:17.571 ********** 2025-03-23 13:36:28.697641 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.697647 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.697653 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.697660 | orchestrator | 2025-03-23 13:36:28.697666 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-03-23 13:36:28.697672 | orchestrator | Sunday 23 March 2025 13:26:50 +0000 (0:00:00.661) 0:05:18.232 ********** 2025-03-23 13:36:28.697678 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.697684 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.697690 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.697700 | orchestrator | 2025-03-23 13:36:28.697706 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-03-23 13:36:28.697714 | orchestrator | Sunday 23 March 2025 13:26:50 +0000 (0:00:00.420) 0:05:18.653 ********** 2025-03-23 13:36:28.697721 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.697727 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.697733 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.697739 | orchestrator | 2025-03-23 13:36:28.697746 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-03-23 13:36:28.697795 | orchestrator | Sunday 23 March 2025 13:26:51 +0000 (0:00:00.719) 0:05:19.373 ********** 2025-03-23 13:36:28.697805 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.697811 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.697817 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.697823 | orchestrator | 2025-03-23 13:36:28.697829 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-03-23 13:36:28.697835 | orchestrator | Sunday 23 March 2025 13:26:51 +0000 (0:00:00.361) 0:05:19.734 ********** 2025-03-23 13:36:28.697841 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.697848 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.697854 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.697860 | orchestrator | 2025-03-23 13:36:28.697866 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-03-23 13:36:28.697872 | orchestrator | Sunday 23 March 2025 13:26:52 +0000 (0:00:00.783) 0:05:20.517 ********** 2025-03-23 13:36:28.697878 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.697888 | 
orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.697894 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.697900 | orchestrator | 2025-03-23 13:36:28.697906 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-03-23 13:36:28.697912 | orchestrator | Sunday 23 March 2025 13:26:52 +0000 (0:00:00.472) 0:05:20.990 ********** 2025-03-23 13:36:28.697918 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.697924 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.697930 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.697937 | orchestrator | 2025-03-23 13:36:28.697943 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-03-23 13:36:28.697949 | orchestrator | Sunday 23 March 2025 13:26:53 +0000 (0:00:00.384) 0:05:21.375 ********** 2025-03-23 13:36:28.697955 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.697961 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.697967 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.697973 | orchestrator | 2025-03-23 13:36:28.697979 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-03-23 13:36:28.697985 | orchestrator | Sunday 23 March 2025 13:26:53 +0000 (0:00:00.388) 0:05:21.763 ********** 2025-03-23 13:36:28.697991 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.697998 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.698004 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.698010 | orchestrator | 2025-03-23 13:36:28.698038 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-03-23 13:36:28.698045 | orchestrator | Sunday 23 March 2025 13:26:54 +0000 (0:00:00.668) 0:05:22.432 ********** 2025-03-23 13:36:28.698051 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.698058 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.698064 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.698070 | orchestrator | 2025-03-23 13:36:28.698076 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-03-23 13:36:28.698082 | orchestrator | Sunday 23 March 2025 13:26:54 +0000 (0:00:00.428) 0:05:22.861 ********** 2025-03-23 13:36:28.698088 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.698094 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.698100 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.698106 | orchestrator | 2025-03-23 13:36:28.698112 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-03-23 13:36:28.698122 | orchestrator | Sunday 23 March 2025 13:26:55 +0000 (0:00:00.458) 0:05:23.319 ********** 2025-03-23 13:36:28.698128 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.698135 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.698141 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.698147 | orchestrator | 2025-03-23 13:36:28.698153 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-03-23 13:36:28.698159 | orchestrator | Sunday 23 March 2025 13:26:55 +0000 (0:00:00.430) 0:05:23.749 ********** 2025-03-23 13:36:28.698165 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.698171 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.698177 | orchestrator | skipping: 
[testbed-node-2] 2025-03-23 13:36:28.698183 | orchestrator | 2025-03-23 13:36:28.698189 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-03-23 13:36:28.698196 | orchestrator | Sunday 23 March 2025 13:26:56 +0000 (0:00:00.695) 0:05:24.445 ********** 2025-03-23 13:36:28.698202 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.698208 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.698214 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.698220 | orchestrator | 2025-03-23 13:36:28.698226 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-03-23 13:36:28.698232 | orchestrator | Sunday 23 March 2025 13:26:56 +0000 (0:00:00.386) 0:05:24.832 ********** 2025-03-23 13:36:28.698238 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.698244 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.698250 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.698257 | orchestrator | 2025-03-23 13:36:28.698263 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-03-23 13:36:28.698269 | orchestrator | Sunday 23 March 2025 13:26:57 +0000 (0:00:00.455) 0:05:25.287 ********** 2025-03-23 13:36:28.698275 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.698281 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.698287 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.698293 | orchestrator | 2025-03-23 13:36:28.698299 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-03-23 13:36:28.698305 | orchestrator | Sunday 23 March 2025 13:26:57 +0000 (0:00:00.441) 0:05:25.729 ********** 2025-03-23 13:36:28.698311 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.698318 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.698324 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.698330 | orchestrator | 2025-03-23 13:36:28.698336 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-03-23 13:36:28.698342 | orchestrator | Sunday 23 March 2025 13:26:58 +0000 (0:00:00.872) 0:05:26.601 ********** 2025-03-23 13:36:28.698348 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.698354 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.698360 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.698366 | orchestrator | 2025-03-23 13:36:28.698372 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-03-23 13:36:28.698416 | orchestrator | Sunday 23 March 2025 13:26:58 +0000 (0:00:00.370) 0:05:26.972 ********** 2025-03-23 13:36:28.698425 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.698432 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.698438 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.698444 | orchestrator | 2025-03-23 13:36:28.698450 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-03-23 13:36:28.698457 | orchestrator | Sunday 23 March 2025 13:26:59 +0000 (0:00:00.491) 0:05:27.463 ********** 2025-03-23 13:36:28.698463 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.698469 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.698475 | orchestrator | skipping: [testbed-node-2] 2025-03-23 
13:36:28.698481 | orchestrator | 2025-03-23 13:36:28.698488 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-03-23 13:36:28.698498 | orchestrator | Sunday 23 March 2025 13:26:59 +0000 (0:00:00.502) 0:05:27.965 ********** 2025-03-23 13:36:28.698504 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.698510 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.698517 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.698523 | orchestrator | 2025-03-23 13:36:28.698529 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-03-23 13:36:28.698535 | orchestrator | Sunday 23 March 2025 13:27:00 +0000 (0:00:00.993) 0:05:28.958 ********** 2025-03-23 13:36:28.698542 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.698548 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.698554 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.698560 | orchestrator | 2025-03-23 13:36:28.698566 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-03-23 13:36:28.698573 | orchestrator | Sunday 23 March 2025 13:27:01 +0000 (0:00:00.521) 0:05:29.479 ********** 2025-03-23 13:36:28.698579 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.698588 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.698594 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.698600 | orchestrator | 2025-03-23 13:36:28.698607 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-03-23 13:36:28.698644 | orchestrator | Sunday 23 March 2025 13:27:01 +0000 (0:00:00.463) 0:05:29.942 ********** 2025-03-23 13:36:28.698652 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.698658 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.698664 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.698671 | orchestrator | 2025-03-23 13:36:28.698677 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-03-23 13:36:28.698683 | orchestrator | Sunday 23 March 2025 13:27:02 +0000 (0:00:00.438) 0:05:30.381 ********** 2025-03-23 13:36:28.698689 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-03-23 13:36:28.698696 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-03-23 13:36:28.698702 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.698708 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-03-23 13:36:28.698714 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-03-23 13:36:28.698721 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.698727 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-03-23 13:36:28.698733 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-03-23 13:36:28.698739 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.698745 | orchestrator | 2025-03-23 13:36:28.698751 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-03-23 13:36:28.698757 | orchestrator | Sunday 23 March 2025 13:27:03 +0000 (0:00:00.823) 0:05:31.204 ********** 2025-03-23 13:36:28.698764 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-03-23 13:36:28.698770 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  
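The num_osds bookkeeping above is skipped on the monitor nodes; on OSD hosts it is derived from ceph-volume reports, as the task names indicate. An illustrative invocation with placeholder device paths:

    # device paths are placeholders; --format json yields a machine-readable report
    ceph-volume lvm batch --report --format json /dev/sdb /dev/sdc
    ceph-volume lvm list --format json    # OSDs that already exist on the host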
2025-03-23 13:36:28.698776 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.698783 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-03-23 13:36:28.698789 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-03-23 13:36:28.698795 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.698801 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-03-23 13:36:28.698808 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-03-23 13:36:28.698814 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.698820 | orchestrator | 2025-03-23 13:36:28.698826 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-03-23 13:36:28.698833 | orchestrator | Sunday 23 March 2025 13:27:03 +0000 (0:00:00.520) 0:05:31.724 ********** 2025-03-23 13:36:28.698839 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.698845 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.698855 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.698861 | orchestrator | 2025-03-23 13:36:28.698867 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-03-23 13:36:28.698873 | orchestrator | Sunday 23 March 2025 13:27:04 +0000 (0:00:00.437) 0:05:32.162 ********** 2025-03-23 13:36:28.698880 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.698886 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.698892 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.698898 | orchestrator | 2025-03-23 13:36:28.698904 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-03-23 13:36:28.698911 | orchestrator | Sunday 23 March 2025 13:27:04 +0000 (0:00:00.621) 0:05:32.783 ********** 2025-03-23 13:36:28.698917 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.698923 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.698929 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.698935 | orchestrator | 2025-03-23 13:36:28.698941 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-03-23 13:36:28.698948 | orchestrator | Sunday 23 March 2025 13:27:05 +0000 (0:00:01.039) 0:05:33.823 ********** 2025-03-23 13:36:28.698954 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.698960 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.698966 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.698973 | orchestrator | 2025-03-23 13:36:28.699015 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-03-23 13:36:28.699024 | orchestrator | Sunday 23 March 2025 13:27:06 +0000 (0:00:00.519) 0:05:34.343 ********** 2025-03-23 13:36:28.699030 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.699036 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.699042 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.699049 | orchestrator | 2025-03-23 13:36:28.699055 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-03-23 13:36:28.699061 | orchestrator | Sunday 23 March 2025 13:27:06 +0000 (0:00:00.540) 0:05:34.883 ********** 2025-03-23 13:36:28.699077 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.699083 | 
orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.699089 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.699096 | orchestrator | 2025-03-23 13:36:28.699102 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-03-23 13:36:28.699108 | orchestrator | Sunday 23 March 2025 13:27:07 +0000 (0:00:00.675) 0:05:35.559 ********** 2025-03-23 13:36:28.699114 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-23 13:36:28.699121 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-23 13:36:28.699127 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-23 13:36:28.699133 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.699139 | orchestrator | 2025-03-23 13:36:28.699145 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-03-23 13:36:28.699152 | orchestrator | Sunday 23 March 2025 13:27:08 +0000 (0:00:01.247) 0:05:36.807 ********** 2025-03-23 13:36:28.699158 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-23 13:36:28.699164 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-23 13:36:28.699170 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-23 13:36:28.699176 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.699182 | orchestrator | 2025-03-23 13:36:28.699188 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-03-23 13:36:28.699194 | orchestrator | Sunday 23 March 2025 13:27:09 +0000 (0:00:00.757) 0:05:37.565 ********** 2025-03-23 13:36:28.699200 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-23 13:36:28.699206 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-23 13:36:28.699212 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-23 13:36:28.699222 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.699228 | orchestrator | 2025-03-23 13:36:28.699233 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-23 13:36:28.699239 | orchestrator | Sunday 23 March 2025 13:27:10 +0000 (0:00:00.804) 0:05:38.370 ********** 2025-03-23 13:36:28.699245 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.699251 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.699257 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.699262 | orchestrator | 2025-03-23 13:36:28.699268 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-03-23 13:36:28.699277 | orchestrator | Sunday 23 March 2025 13:27:10 +0000 (0:00:00.610) 0:05:38.980 ********** 2025-03-23 13:36:28.699283 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-03-23 13:36:28.699289 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.699295 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-03-23 13:36:28.699301 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.699308 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-03-23 13:36:28.699314 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.699320 | orchestrator | 2025-03-23 13:36:28.699326 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-03-23 13:36:28.699332 | orchestrator | Sunday 23 March 2025 13:27:11 
+0000 (0:00:00.754) 0:05:39.735 ********** 2025-03-23 13:36:28.699338 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.699345 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.699351 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.699357 | orchestrator | 2025-03-23 13:36:28.699363 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-23 13:36:28.699369 | orchestrator | Sunday 23 March 2025 13:27:12 +0000 (0:00:00.848) 0:05:40.583 ********** 2025-03-23 13:36:28.699375 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.699382 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.699388 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.699394 | orchestrator | 2025-03-23 13:36:28.699400 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-03-23 13:36:28.699406 | orchestrator | Sunday 23 March 2025 13:27:12 +0000 (0:00:00.453) 0:05:41.037 ********** 2025-03-23 13:36:28.699413 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-03-23 13:36:28.699419 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.699425 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-03-23 13:36:28.699431 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.699437 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-03-23 13:36:28.699443 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.699449 | orchestrator | 2025-03-23 13:36:28.699456 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-03-23 13:36:28.699462 | orchestrator | Sunday 23 March 2025 13:27:13 +0000 (0:00:00.682) 0:05:41.719 ********** 2025-03-23 13:36:28.699468 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.699474 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.699480 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.699486 | orchestrator | 2025-03-23 13:36:28.699493 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-03-23 13:36:28.699499 | orchestrator | Sunday 23 March 2025 13:27:14 +0000 (0:00:00.560) 0:05:42.279 ********** 2025-03-23 13:36:28.699505 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-23 13:36:28.699511 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-23 13:36:28.699517 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-23 13:36:28.699524 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-03-23 13:36:28.699543 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-03-23 13:36:28.699551 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-03-23 13:36:28.699560 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.699570 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.699576 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-03-23 13:36:28.699585 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-03-23 13:36:28.699592 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-03-23 13:36:28.699598 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.699604 | orchestrator | 2025-03-23 13:36:28.699610 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 
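The "generate ceph.conf configuration file" task (skipped on these nodes) renders the cluster configuration from the facts gathered above. Purely as an illustration, with a placeholder fsid and documentation addresses, the rendered file boils down to something like:

    # placeholder values only; the real template carries many more options
    cat > /etc/ceph/ceph.conf <<'EOF'
    [global]
    fsid = 00000000-0000-0000-0000-000000000000
    mon host = [v2:192.0.2.11:3300,v1:192.0.2.11:6789],[v2:192.0.2.12:3300,v1:192.0.2.12:6789],[v2:192.0.2.13:3300,v1:192.0.2.13:6789]
    public network = 192.0.2.0/24
    EOF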
2025-03-23 13:36:28.699629 | orchestrator | Sunday 23 March 2025 13:27:15 +0000 (0:00:01.546) 0:05:43.826 ********** 2025-03-23 13:36:28.699635 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.699641 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.699647 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.699653 | orchestrator | 2025-03-23 13:36:28.699660 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-03-23 13:36:28.699666 | orchestrator | Sunday 23 March 2025 13:27:16 +0000 (0:00:00.728) 0:05:44.555 ********** 2025-03-23 13:36:28.699673 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.699679 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.699686 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.699692 | orchestrator | 2025-03-23 13:36:28.699698 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-03-23 13:36:28.699705 | orchestrator | Sunday 23 March 2025 13:27:17 +0000 (0:00:01.042) 0:05:45.598 ********** 2025-03-23 13:36:28.699711 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.699718 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.699724 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.699730 | orchestrator | 2025-03-23 13:36:28.699737 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-03-23 13:36:28.699743 | orchestrator | Sunday 23 March 2025 13:27:18 +0000 (0:00:00.650) 0:05:46.248 ********** 2025-03-23 13:36:28.699749 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.699756 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.699762 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.699768 | orchestrator | 2025-03-23 13:36:28.699775 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-03-23 13:36:28.699781 | orchestrator | Sunday 23 March 2025 13:27:19 +0000 (0:00:01.293) 0:05:47.542 ********** 2025-03-23 13:36:28.699788 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.699794 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.699801 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.699807 | orchestrator | 2025-03-23 13:36:28.699814 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-03-23 13:36:28.699820 | orchestrator | Sunday 23 March 2025 13:27:19 +0000 (0:00:00.543) 0:05:48.086 ********** 2025-03-23 13:36:28.699827 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:36:28.699833 | orchestrator | 2025-03-23 13:36:28.699839 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-03-23 13:36:28.699846 | orchestrator | Sunday 23 March 2025 13:27:21 +0000 (0:00:01.119) 0:05:49.205 ********** 2025-03-23 13:36:28.699852 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.699858 | orchestrator | 2025-03-23 13:36:28.699865 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-03-23 13:36:28.699871 | orchestrator | Sunday 23 March 2025 13:27:21 +0000 (0:00:00.213) 0:05:49.419 ********** 2025-03-23 13:36:28.699878 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-03-23 13:36:28.699884 | orchestrator | 2025-03-23 13:36:28.699890 | 
orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-03-23 13:36:28.699897 | orchestrator | Sunday 23 March 2025 13:27:22 +0000 (0:00:00.980) 0:05:50.399 ********** 2025-03-23 13:36:28.699903 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.699915 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.699922 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.699928 | orchestrator | 2025-03-23 13:36:28.699935 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-03-23 13:36:28.699941 | orchestrator | Sunday 23 March 2025 13:27:22 +0000 (0:00:00.463) 0:05:50.863 ********** 2025-03-23 13:36:28.699947 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.699954 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.699960 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.699967 | orchestrator | 2025-03-23 13:36:28.699973 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-03-23 13:36:28.699982 | orchestrator | Sunday 23 March 2025 13:27:23 +0000 (0:00:00.487) 0:05:51.350 ********** 2025-03-23 13:36:28.699989 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.699996 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.700003 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.700009 | orchestrator | 2025-03-23 13:36:28.700015 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-03-23 13:36:28.700021 | orchestrator | Sunday 23 March 2025 13:27:24 +0000 (0:00:01.626) 0:05:52.977 ********** 2025-03-23 13:36:28.700026 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.700032 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.700038 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.700044 | orchestrator | 2025-03-23 13:36:28.700049 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-03-23 13:36:28.700055 | orchestrator | Sunday 23 March 2025 13:27:25 +0000 (0:00:00.885) 0:05:53.863 ********** 2025-03-23 13:36:28.700061 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.700067 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.700073 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.700078 | orchestrator | 2025-03-23 13:36:28.700084 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-03-23 13:36:28.700090 | orchestrator | Sunday 23 March 2025 13:27:26 +0000 (0:00:00.834) 0:05:54.697 ********** 2025-03-23 13:36:28.700095 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.700101 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.700107 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.700113 | orchestrator | 2025-03-23 13:36:28.700133 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-03-23 13:36:28.700140 | orchestrator | Sunday 23 March 2025 13:27:27 +0000 (0:00:01.017) 0:05:55.715 ********** 2025-03-23 13:36:28.700146 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.700152 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.700158 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.700163 | orchestrator | 2025-03-23 13:36:28.700169 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-03-23 13:36:28.700175 | 
orchestrator | Sunday 23 March 2025 13:27:28 +0000 (0:00:00.465) 0:05:56.180 ********** 2025-03-23 13:36:28.700181 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.700187 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.700193 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.700199 | orchestrator | 2025-03-23 13:36:28.700204 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************ 2025-03-23 13:36:28.700210 | orchestrator | Sunday 23 March 2025 13:27:28 +0000 (0:00:00.475) 0:05:56.655 ********** 2025-03-23 13:36:28.700216 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.700222 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.700228 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.700234 | orchestrator | 2025-03-23 13:36:28.700239 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-03-23 13:36:28.700245 | orchestrator | Sunday 23 March 2025 13:27:28 +0000 (0:00:00.420) 0:05:57.076 ********** 2025-03-23 13:36:28.700251 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.700257 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.700263 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.700272 | orchestrator | 2025-03-23 13:36:28.700278 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-03-23 13:36:28.700284 | orchestrator | Sunday 23 March 2025 13:27:29 +0000 (0:00:00.881) 0:05:57.957 ********** 2025-03-23 13:36:28.700290 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.700296 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.700302 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.700308 | orchestrator | 2025-03-23 13:36:28.700314 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-03-23 13:36:28.700320 | orchestrator | Sunday 23 March 2025 13:27:31 +0000 (0:00:01.261) 0:05:59.219 ********** 2025-03-23 13:36:28.700325 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.700334 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.700340 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.700346 | orchestrator | 2025-03-23 13:36:28.700352 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************ 2025-03-23 13:36:28.700358 | orchestrator | Sunday 23 March 2025 13:27:31 +0000 (0:00:00.338) 0:05:59.557 ********** 2025-03-23 13:36:28.700363 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:36:28.700369 | orchestrator | 2025-03-23 13:36:28.700375 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] ************* 2025-03-23 13:36:28.700381 | orchestrator | Sunday 23 March 2025 13:27:32 +0000 (0:00:00.889) 0:06:00.447 ********** 2025-03-23 13:36:28.700387 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.700392 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.700398 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.700404 | orchestrator | 2025-03-23 13:36:28.700410 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-03-23 13:36:28.700415 | orchestrator | Sunday 23 March 2025 13:27:32 +0000 (0:00:00.363) 0:06:00.810 ********** 2025-03-23 13:36:28.700421 | orchestrator | skipping: [testbed-node-0] 
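The keyring and mkfs tasks above correspond to the standard monitor bootstrap commands; a minimal sketch, assuming the default cluster name "ceph", the short hostname as monitor id, and that in this containerized setup the commands run inside the mon image:

    MON_ID=$(hostname -s)
    ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring \
        --gen-key -n mon. --cap mon 'allow *'              # monitor initial keyring
    mkdir -p "/var/lib/ceph/mon/ceph-${MON_ID}"            # create monitor directory
    ceph-mon --mkfs -i "${MON_ID}" \
        --keyring /etc/ceph/ceph.mon.keyring               # monitor mkfs with keyring
    # a ceph.conf/monmap with the mon addresses is also needed; omitted here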
2025-03-23 13:36:28.700427 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.700433 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.700439 | orchestrator | 2025-03-23 13:36:28.700444 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-03-23 13:36:28.700450 | orchestrator | Sunday 23 March 2025 13:27:33 +0000 (0:00:00.384) 0:06:01.194 ********** 2025-03-23 13:36:28.700456 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:36:28.700462 | orchestrator | 2025-03-23 13:36:28.700468 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-03-23 13:36:28.700474 | orchestrator | Sunday 23 March 2025 13:27:33 +0000 (0:00:00.894) 0:06:02.089 ********** 2025-03-23 13:36:28.700479 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.700485 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.700491 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.700497 | orchestrator | 2025-03-23 13:36:28.700502 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-03-23 13:36:28.700508 | orchestrator | Sunday 23 March 2025 13:27:35 +0000 (0:00:01.446) 0:06:03.535 ********** 2025-03-23 13:36:28.700514 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.700520 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.700525 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.700531 | orchestrator | 2025-03-23 13:36:28.700537 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-03-23 13:36:28.700545 | orchestrator | Sunday 23 March 2025 13:27:36 +0000 (0:00:01.259) 0:06:04.794 ********** 2025-03-23 13:36:28.700551 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.700557 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.700563 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.700568 | orchestrator | 2025-03-23 13:36:28.700574 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-03-23 13:36:28.700580 | orchestrator | Sunday 23 March 2025 13:27:38 +0000 (0:00:02.044) 0:06:06.839 ********** 2025-03-23 13:36:28.700590 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.700596 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.700601 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.700607 | orchestrator | 2025-03-23 13:36:28.700622 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-03-23 13:36:28.700629 | orchestrator | Sunday 23 March 2025 13:27:40 +0000 (0:00:02.217) 0:06:09.057 ********** 2025-03-23 13:36:28.700635 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:36:28.700640 | orchestrator | 2025-03-23 13:36:28.700661 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] ************* 2025-03-23 13:36:28.700667 | orchestrator | Sunday 23 March 2025 13:27:41 +0000 (0:00:00.997) 0:06:10.054 ********** 2025-03-23 13:36:28.700673 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 
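The quorum wait above needed one retry before all three monitors reported in. Stripped of the container exec wrapper, and assuming jq is available (an assumption), the check is essentially:

    # exits non-zero until all three expected mons appear in quorum_names
    ceph quorum_status --format json | jq -e '.quorum_names | length >= 3'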
2025-03-23 13:36:28.700679 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.700685 | orchestrator | 2025-03-23 13:36:28.700691 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-03-23 13:36:28.700697 | orchestrator | Sunday 23 March 2025 13:28:03 +0000 (0:00:21.573) 0:06:31.628 ********** 2025-03-23 13:36:28.700703 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.700709 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.700715 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.700720 | orchestrator | 2025-03-23 13:36:28.700726 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-03-23 13:36:28.700732 | orchestrator | Sunday 23 March 2025 13:28:10 +0000 (0:00:07.459) 0:06:39.087 ********** 2025-03-23 13:36:28.700738 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.700744 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.700750 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.700755 | orchestrator | 2025-03-23 13:36:28.700761 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-03-23 13:36:28.700767 | orchestrator | Sunday 23 March 2025 13:28:12 +0000 (0:00:01.229) 0:06:40.317 ********** 2025-03-23 13:36:28.700773 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.700779 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.700784 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.700790 | orchestrator | 2025-03-23 13:36:28.700796 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-03-23 13:36:28.700802 | orchestrator | Sunday 23 March 2025 13:28:13 +0000 (0:00:00.868) 0:06:41.185 ********** 2025-03-23 13:36:28.700808 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:36:28.700814 | orchestrator | 2025-03-23 13:36:28.700823 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-03-23 13:36:28.700829 | orchestrator | Sunday 23 March 2025 13:28:14 +0000 (0:00:00.994) 0:06:42.180 ********** 2025-03-23 13:36:28.700834 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.700840 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.700846 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.700852 | orchestrator | 2025-03-23 13:36:28.700858 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-03-23 13:36:28.700864 | orchestrator | Sunday 23 March 2025 13:28:14 +0000 (0:00:00.471) 0:06:42.651 ********** 2025-03-23 13:36:28.700869 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.700875 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.700881 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.700887 | orchestrator | 2025-03-23 13:36:28.700893 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-03-23 13:36:28.700898 | orchestrator | Sunday 23 March 2025 13:28:15 +0000 (0:00:01.393) 0:06:44.045 ********** 2025-03-23 13:36:28.700904 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-23 13:36:28.700914 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-23 13:36:28.700920 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-23 
13:36:28.700926 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.700932 | orchestrator | 2025-03-23 13:36:28.700938 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-03-23 13:36:28.700944 | orchestrator | Sunday 23 March 2025 13:28:17 +0000 (0:00:01.558) 0:06:45.603 ********** 2025-03-23 13:36:28.700950 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.700956 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.700961 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.700967 | orchestrator | 2025-03-23 13:36:28.700973 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-03-23 13:36:28.700979 | orchestrator | Sunday 23 March 2025 13:28:18 +0000 (0:00:00.591) 0:06:46.195 ********** 2025-03-23 13:36:28.700985 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.700991 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.700997 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.701002 | orchestrator | 2025-03-23 13:36:28.701008 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-03-23 13:36:28.701014 | orchestrator | 2025-03-23 13:36:28.701020 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-03-23 13:36:28.701026 | orchestrator | Sunday 23 March 2025 13:28:20 +0000 (0:00:02.797) 0:06:48.993 ********** 2025-03-23 13:36:28.701032 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:36:28.701038 | orchestrator | 2025-03-23 13:36:28.701044 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-03-23 13:36:28.701049 | orchestrator | Sunday 23 March 2025 13:28:21 +0000 (0:00:00.780) 0:06:49.774 ********** 2025-03-23 13:36:28.701055 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.701061 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.701067 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.701073 | orchestrator | 2025-03-23 13:36:28.701078 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-03-23 13:36:28.701084 | orchestrator | Sunday 23 March 2025 13:28:22 +0000 (0:00:00.769) 0:06:50.544 ********** 2025-03-23 13:36:28.701090 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.701096 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.701102 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.701107 | orchestrator | 2025-03-23 13:36:28.701116 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-03-23 13:36:28.701122 | orchestrator | Sunday 23 March 2025 13:28:22 +0000 (0:00:00.360) 0:06:50.904 ********** 2025-03-23 13:36:28.701127 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.701136 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.701141 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.701147 | orchestrator | 2025-03-23 13:36:28.701166 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-03-23 13:36:28.701173 | orchestrator | Sunday 23 March 2025 13:28:23 +0000 (0:00:00.651) 0:06:51.555 ********** 2025-03-23 13:36:28.701179 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.701184 | orchestrator | skipping: 
[testbed-node-1] 2025-03-23 13:36:28.701190 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.701196 | orchestrator | 2025-03-23 13:36:28.701202 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-03-23 13:36:28.701208 | orchestrator | Sunday 23 March 2025 13:28:23 +0000 (0:00:00.379) 0:06:51.934 ********** 2025-03-23 13:36:28.701213 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.701219 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.701225 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.701231 | orchestrator | 2025-03-23 13:36:28.701237 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-03-23 13:36:28.701242 | orchestrator | Sunday 23 March 2025 13:28:24 +0000 (0:00:00.944) 0:06:52.879 ********** 2025-03-23 13:36:28.701253 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.701259 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.701264 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.701270 | orchestrator | 2025-03-23 13:36:28.701276 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-03-23 13:36:28.701282 | orchestrator | Sunday 23 March 2025 13:28:25 +0000 (0:00:00.420) 0:06:53.299 ********** 2025-03-23 13:36:28.701287 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.701293 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.701299 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.701305 | orchestrator | 2025-03-23 13:36:28.701310 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-03-23 13:36:28.701316 | orchestrator | Sunday 23 March 2025 13:28:25 +0000 (0:00:00.637) 0:06:53.937 ********** 2025-03-23 13:36:28.701322 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.701328 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.701333 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.701339 | orchestrator | 2025-03-23 13:36:28.701345 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-03-23 13:36:28.701351 | orchestrator | Sunday 23 March 2025 13:28:26 +0000 (0:00:00.393) 0:06:54.331 ********** 2025-03-23 13:36:28.701357 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.701362 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.701368 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.701374 | orchestrator | 2025-03-23 13:36:28.701380 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-03-23 13:36:28.701386 | orchestrator | Sunday 23 March 2025 13:28:26 +0000 (0:00:00.356) 0:06:54.688 ********** 2025-03-23 13:36:28.701391 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.701397 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.701403 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.701409 | orchestrator | 2025-03-23 13:36:28.701415 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-03-23 13:36:28.701420 | orchestrator | Sunday 23 March 2025 13:28:26 +0000 (0:00:00.349) 0:06:55.037 ********** 2025-03-23 13:36:28.701426 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.701432 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.701438 | orchestrator | ok: [testbed-node-2] 
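The block of "check for a ... container" tasks above records which Ceph daemons are already running on each host; the set_fact handler_*_status tasks that follow turn those results into booleans the restart handlers consult, so only daemons that actually exist get restarted. A sketch of the pattern for the mgr case, assuming a Docker runtime and an illustrative container name (the real role also supports podman and derives the name itself):

  - name: check for a mgr container (sketch)
    ansible.builtin.command: docker ps -q --filter "name=ceph-mgr-{{ ansible_facts['hostname'] }}"
    register: ceph_mgr_container_stat
    changed_when: false
    failed_when: false   # absence of the container is a valid result, not a failure

  - name: set_fact handler_mgr_status (sketch)
    ansible.builtin.set_fact:
      handler_mgr_status: "{{ ceph_mgr_container_stat.stdout_lines | length > 0 }}"
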
2025-03-23 13:36:28.701444 | orchestrator | 2025-03-23 13:36:28.701450 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-03-23 13:36:28.701455 | orchestrator | Sunday 23 March 2025 13:28:28 +0000 (0:00:01.141) 0:06:56.179 ********** 2025-03-23 13:36:28.701461 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.701467 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.701472 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.701478 | orchestrator | 2025-03-23 13:36:28.701484 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-03-23 13:36:28.701490 | orchestrator | Sunday 23 March 2025 13:28:28 +0000 (0:00:00.360) 0:06:56.539 ********** 2025-03-23 13:36:28.701496 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.701502 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.701507 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.701513 | orchestrator | 2025-03-23 13:36:28.701519 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-03-23 13:36:28.701525 | orchestrator | Sunday 23 March 2025 13:28:28 +0000 (0:00:00.386) 0:06:56.925 ********** 2025-03-23 13:36:28.701531 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.701536 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.701542 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.701548 | orchestrator | 2025-03-23 13:36:28.701554 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-03-23 13:36:28.701559 | orchestrator | Sunday 23 March 2025 13:28:29 +0000 (0:00:00.350) 0:06:57.275 ********** 2025-03-23 13:36:28.701568 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.701574 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.701580 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.701586 | orchestrator | 2025-03-23 13:36:28.701592 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-03-23 13:36:28.701597 | orchestrator | Sunday 23 March 2025 13:28:29 +0000 (0:00:00.702) 0:06:57.978 ********** 2025-03-23 13:36:28.701603 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.701609 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.701643 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.701649 | orchestrator | 2025-03-23 13:36:28.701655 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-03-23 13:36:28.701661 | orchestrator | Sunday 23 March 2025 13:28:30 +0000 (0:00:00.355) 0:06:58.333 ********** 2025-03-23 13:36:28.701667 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.701673 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.701679 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.701684 | orchestrator | 2025-03-23 13:36:28.701690 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-03-23 13:36:28.701699 | orchestrator | Sunday 23 March 2025 13:28:30 +0000 (0:00:00.362) 0:06:58.696 ********** 2025-03-23 13:36:28.701705 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.701711 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.701717 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.701722 | orchestrator | 2025-03-23 13:36:28.701743 | 
orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-03-23 13:36:28.701750 | orchestrator | Sunday 23 March 2025 13:28:31 +0000 (0:00:00.774) 0:06:59.470 ********** 2025-03-23 13:36:28.701756 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.701762 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.701768 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.701774 | orchestrator | 2025-03-23 13:36:28.701780 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-03-23 13:36:28.701786 | orchestrator | Sunday 23 March 2025 13:28:31 +0000 (0:00:00.585) 0:07:00.055 ********** 2025-03-23 13:36:28.701791 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.701800 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.701806 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.701811 | orchestrator | 2025-03-23 13:36:28.701817 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-03-23 13:36:28.701823 | orchestrator | Sunday 23 March 2025 13:28:32 +0000 (0:00:00.461) 0:07:00.517 ********** 2025-03-23 13:36:28.701829 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.701835 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.701840 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.701846 | orchestrator | 2025-03-23 13:36:28.701852 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-03-23 13:36:28.701858 | orchestrator | Sunday 23 March 2025 13:28:32 +0000 (0:00:00.410) 0:07:00.927 ********** 2025-03-23 13:36:28.701864 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.701869 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.701875 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.701881 | orchestrator | 2025-03-23 13:36:28.701887 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-03-23 13:36:28.701893 | orchestrator | Sunday 23 March 2025 13:28:33 +0000 (0:00:00.943) 0:07:01.871 ********** 2025-03-23 13:36:28.701898 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.701904 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.701910 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.701916 | orchestrator | 2025-03-23 13:36:28.701921 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-03-23 13:36:28.701927 | orchestrator | Sunday 23 March 2025 13:28:34 +0000 (0:00:00.445) 0:07:02.316 ********** 2025-03-23 13:36:28.701933 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.701939 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.701948 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.701954 | orchestrator | 2025-03-23 13:36:28.701960 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-03-23 13:36:28.701966 | orchestrator | Sunday 23 March 2025 13:28:34 +0000 (0:00:00.567) 0:07:02.884 ********** 2025-03-23 13:36:28.701971 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.701977 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.701983 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.701989 | orchestrator | 2025-03-23 13:36:28.701995 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 
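The ceph-config tasks in this stretch estimate how many OSDs each host will carry (the count later feeds the osd_memory_target calculation); on these monitor/manager hosts every step is skipped. The estimate normally comes from ceph-volume's report mode plus a count of already-created OSDs. A simplified sketch is below, where devices stands for the host's configured OSD device list; the JSON handling is deliberately naive, since the log distinguishes a legacy and a new report format and the real parsing is version dependent.

  - name: run 'ceph-volume lvm batch --report' (sketch)
    ansible.builtin.command: >
      ceph-volume lvm batch --report --format json {{ devices | join(' ') }}
    register: lvm_batch_report
    changed_when: false

  - name: set_fact num_osds from the report (sketch)
    ansible.builtin.set_fact:
      # new-style report: a list with one entry per OSD that would be created
      num_osds: "{{ (lvm_batch_report.stdout | from_json) | length }}"
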
2025-03-23 13:36:28.702000 | orchestrator | Sunday 23 March 2025 13:28:35 +0000 (0:00:00.416) 0:07:03.300 ********** 2025-03-23 13:36:28.702006 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702012 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702033 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.702038 | orchestrator | 2025-03-23 13:36:28.702043 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-03-23 13:36:28.702049 | orchestrator | Sunday 23 March 2025 13:28:35 +0000 (0:00:00.714) 0:07:04.015 ********** 2025-03-23 13:36:28.702054 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702059 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702065 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.702070 | orchestrator | 2025-03-23 13:36:28.702075 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-03-23 13:36:28.702081 | orchestrator | Sunday 23 March 2025 13:28:36 +0000 (0:00:00.461) 0:07:04.476 ********** 2025-03-23 13:36:28.702086 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702091 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702097 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.702102 | orchestrator | 2025-03-23 13:36:28.702107 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-03-23 13:36:28.702112 | orchestrator | Sunday 23 March 2025 13:28:36 +0000 (0:00:00.505) 0:07:04.982 ********** 2025-03-23 13:36:28.702118 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702123 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702128 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.702134 | orchestrator | 2025-03-23 13:36:28.702139 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-03-23 13:36:28.702144 | orchestrator | Sunday 23 March 2025 13:28:37 +0000 (0:00:00.520) 0:07:05.502 ********** 2025-03-23 13:36:28.702150 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702155 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702160 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.702165 | orchestrator | 2025-03-23 13:36:28.702171 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-03-23 13:36:28.702176 | orchestrator | Sunday 23 March 2025 13:28:38 +0000 (0:00:00.854) 0:07:06.356 ********** 2025-03-23 13:36:28.702181 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702186 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702192 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.702197 | orchestrator | 2025-03-23 13:36:28.702202 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-03-23 13:36:28.702207 | orchestrator | Sunday 23 March 2025 13:28:38 +0000 (0:00:00.409) 0:07:06.766 ********** 2025-03-23 13:36:28.702213 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702218 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702223 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.702228 | orchestrator | 2025-03-23 13:36:28.702234 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target, override from ceph_conf_overrides] *** 2025-03-23 13:36:28.702239 | orchestrator | Sunday 23 March 2025 13:28:39 +0000 (0:00:00.454) 0:07:07.221 ********** 2025-03-23 13:36:28.702257 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-03-23 13:36:28.702269 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-03-23 13:36:28.702275 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702280 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-03-23 13:36:28.702285 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-03-23 13:36:28.702290 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702296 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-03-23 13:36:28.702301 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-03-23 13:36:28.702306 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.702311 | orchestrator | 2025-03-23 13:36:28.702317 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-03-23 13:36:28.702322 | orchestrator | Sunday 23 March 2025 13:28:39 +0000 (0:00:00.689) 0:07:07.911 ********** 2025-03-23 13:36:28.702327 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-03-23 13:36:28.702333 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-03-23 13:36:28.702338 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-03-23 13:36:28.702343 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-03-23 13:36:28.702348 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702354 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702359 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-03-23 13:36:28.702364 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-03-23 13:36:28.702369 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.702375 | orchestrator | 2025-03-23 13:36:28.702380 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-03-23 13:36:28.702385 | orchestrator | Sunday 23 March 2025 13:28:40 +0000 (0:00:00.813) 0:07:08.724 ********** 2025-03-23 13:36:28.702390 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702396 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702401 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.702406 | orchestrator | 2025-03-23 13:36:28.702411 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-03-23 13:36:28.702419 | orchestrator | Sunday 23 March 2025 13:28:40 +0000 (0:00:00.411) 0:07:09.136 ********** 2025-03-23 13:36:28.702424 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702430 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702435 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.702440 | orchestrator | 2025-03-23 13:36:28.702446 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-03-23 13:36:28.702451 | orchestrator | Sunday 23 March 2025 13:28:41 +0000 (0:00:00.474) 0:07:09.610 ********** 2025-03-23 13:36:28.702456 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702464 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702469 | orchestrator | skipping: [testbed-node-2] 2025-03-23 
13:36:28.702474 | orchestrator | 2025-03-23 13:36:28.702480 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-03-23 13:36:28.702485 | orchestrator | Sunday 23 March 2025 13:28:41 +0000 (0:00:00.491) 0:07:10.102 ********** 2025-03-23 13:36:28.702490 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702495 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702501 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.702506 | orchestrator | 2025-03-23 13:36:28.702511 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-03-23 13:36:28.702516 | orchestrator | Sunday 23 March 2025 13:28:42 +0000 (0:00:00.697) 0:07:10.799 ********** 2025-03-23 13:36:28.702522 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702527 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702532 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.702537 | orchestrator | 2025-03-23 13:36:28.702543 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-03-23 13:36:28.702551 | orchestrator | Sunday 23 March 2025 13:28:43 +0000 (0:00:00.425) 0:07:11.224 ********** 2025-03-23 13:36:28.702556 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702561 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702567 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.702572 | orchestrator | 2025-03-23 13:36:28.702577 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-03-23 13:36:28.702582 | orchestrator | Sunday 23 March 2025 13:28:43 +0000 (0:00:00.369) 0:07:11.594 ********** 2025-03-23 13:36:28.702588 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-23 13:36:28.702593 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-23 13:36:28.702598 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-23 13:36:28.702603 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702609 | orchestrator | 2025-03-23 13:36:28.702625 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-03-23 13:36:28.702631 | orchestrator | Sunday 23 March 2025 13:28:43 +0000 (0:00:00.454) 0:07:12.049 ********** 2025-03-23 13:36:28.702636 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-23 13:36:28.702641 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-23 13:36:28.702647 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-23 13:36:28.702652 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702657 | orchestrator | 2025-03-23 13:36:28.702662 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-03-23 13:36:28.702668 | orchestrator | Sunday 23 March 2025 13:28:44 +0000 (0:00:00.509) 0:07:12.558 ********** 2025-03-23 13:36:28.702673 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-23 13:36:28.702678 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-23 13:36:28.702683 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-23 13:36:28.702689 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702694 | orchestrator | 2025-03-23 13:36:28.702699 | orchestrator | TASK [ceph-facts : reset 
rgw_instances (workaround)] *************************** 2025-03-23 13:36:28.702720 | orchestrator | Sunday 23 March 2025 13:28:45 +0000 (0:00:00.795) 0:07:13.354 ********** 2025-03-23 13:36:28.702726 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702732 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702737 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.702742 | orchestrator | 2025-03-23 13:36:28.702747 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-03-23 13:36:28.702753 | orchestrator | Sunday 23 March 2025 13:28:45 +0000 (0:00:00.724) 0:07:14.078 ********** 2025-03-23 13:36:28.702758 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-03-23 13:36:28.702763 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702769 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-03-23 13:36:28.702774 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702779 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-03-23 13:36:28.702784 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.702789 | orchestrator | 2025-03-23 13:36:28.702795 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-03-23 13:36:28.702800 | orchestrator | Sunday 23 March 2025 13:28:46 +0000 (0:00:00.534) 0:07:14.613 ********** 2025-03-23 13:36:28.702805 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702810 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702816 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.702821 | orchestrator | 2025-03-23 13:36:28.702826 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-23 13:36:28.702832 | orchestrator | Sunday 23 March 2025 13:28:46 +0000 (0:00:00.398) 0:07:15.011 ********** 2025-03-23 13:36:28.702837 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702842 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702850 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.702856 | orchestrator | 2025-03-23 13:36:28.702861 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-03-23 13:36:28.702866 | orchestrator | Sunday 23 March 2025 13:28:47 +0000 (0:00:00.759) 0:07:15.771 ********** 2025-03-23 13:36:28.702871 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-03-23 13:36:28.702877 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702882 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-03-23 13:36:28.702887 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702892 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-03-23 13:36:28.702898 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.702903 | orchestrator | 2025-03-23 13:36:28.702908 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-03-23 13:36:28.702913 | orchestrator | Sunday 23 March 2025 13:28:48 +0000 (0:00:00.665) 0:07:16.437 ********** 2025-03-23 13:36:28.702918 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702924 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702929 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.702934 | orchestrator | 2025-03-23 13:36:28.702940 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 
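A few entries below, ceph-config's "generate ceph.conf configuration file" task would normally render /etc/ceph/ceph.conf from the facts gathered so far; in this run it is skipped on these hosts. For orientation, a sketch of the kind of minimal file that step manages, written as a copy task so it stays in Ansible form; the monitor addresses are the ones visible in the delegation hints in this log, everything else is a placeholder.

  - name: generate ceph.conf configuration file (sketch)
    ansible.builtin.copy:
      dest: /etc/ceph/ceph.conf
      mode: "0644"
      content: |
        [global]
        fsid = <cluster-fsid>
        mon host = 192.168.16.10,192.168.16.11,192.168.16.12
        public network = <public-network-cidr>
    # the real role renders a Jinja2 template and merges ceph_conf_overrides on top
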
2025-03-23 13:36:28.702945 | orchestrator | Sunday 23 March 2025 13:28:48 +0000 (0:00:00.400) 0:07:16.837 ********** 2025-03-23 13:36:28.702950 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-23 13:36:28.702955 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-23 13:36:28.702960 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-23 13:36:28.702966 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-03-23 13:36:28.702971 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-03-23 13:36:28.702976 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-03-23 13:36:28.702981 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.702987 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.702992 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-03-23 13:36:28.702997 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-03-23 13:36:28.703002 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-03-23 13:36:28.703008 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.703013 | orchestrator | 2025-03-23 13:36:28.703018 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-03-23 13:36:28.703023 | orchestrator | Sunday 23 March 2025 13:28:49 +0000 (0:00:01.228) 0:07:18.066 ********** 2025-03-23 13:36:28.703029 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.703034 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.703039 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.703044 | orchestrator | 2025-03-23 13:36:28.703050 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-03-23 13:36:28.703055 | orchestrator | Sunday 23 March 2025 13:28:50 +0000 (0:00:00.688) 0:07:18.754 ********** 2025-03-23 13:36:28.703060 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.703065 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.703070 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.703076 | orchestrator | 2025-03-23 13:36:28.703083 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-03-23 13:36:28.703088 | orchestrator | Sunday 23 March 2025 13:28:51 +0000 (0:00:00.963) 0:07:19.718 ********** 2025-03-23 13:36:28.703094 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.703099 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.703104 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.703109 | orchestrator | 2025-03-23 13:36:28.703115 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-03-23 13:36:28.703120 | orchestrator | Sunday 23 March 2025 13:28:52 +0000 (0:00:00.620) 0:07:20.338 ********** 2025-03-23 13:36:28.703138 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.703143 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.703148 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.703154 | orchestrator | 2025-03-23 13:36:28.703159 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-03-23 13:36:28.703164 | orchestrator | Sunday 23 March 2025 13:28:53 +0000 (0:00:01.086) 0:07:21.425 ********** 2025-03-23 13:36:28.703169 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2025-03-23 13:36:28.703187 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-23 13:36:28.703193 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-23 13:36:28.703199 | orchestrator | 2025-03-23 13:36:28.703204 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-03-23 13:36:28.703209 | orchestrator | Sunday 23 March 2025 13:28:53 +0000 (0:00:00.659) 0:07:22.084 ********** 2025-03-23 13:36:28.703214 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:36:28.703220 | orchestrator | 2025-03-23 13:36:28.703225 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-03-23 13:36:28.703230 | orchestrator | Sunday 23 March 2025 13:28:54 +0000 (0:00:00.532) 0:07:22.616 ********** 2025-03-23 13:36:28.703235 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.703241 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.703246 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.703251 | orchestrator | 2025-03-23 13:36:28.703256 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-03-23 13:36:28.703262 | orchestrator | Sunday 23 March 2025 13:28:55 +0000 (0:00:00.699) 0:07:23.316 ********** 2025-03-23 13:36:28.703267 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.703272 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.703280 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.703286 | orchestrator | 2025-03-23 13:36:28.703291 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-03-23 13:36:28.703296 | orchestrator | Sunday 23 March 2025 13:28:55 +0000 (0:00:00.715) 0:07:24.031 ********** 2025-03-23 13:36:28.703302 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-03-23 13:36:28.703307 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-03-23 13:36:28.703312 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-03-23 13:36:28.703318 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-03-23 13:36:28.703323 | orchestrator | 2025-03-23 13:36:28.703328 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-03-23 13:36:28.703334 | orchestrator | Sunday 23 March 2025 13:29:04 +0000 (0:00:08.508) 0:07:32.540 ********** 2025-03-23 13:36:28.703339 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.703344 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.703350 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.703355 | orchestrator | 2025-03-23 13:36:28.703360 | orchestrator | TASK [ceph-mgr : get keys from monitors] *************************************** 2025-03-23 13:36:28.703365 | orchestrator | Sunday 23 March 2025 13:29:05 +0000 (0:00:00.743) 0:07:33.284 ********** 2025-03-23 13:36:28.703371 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-03-23 13:36:28.703376 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-03-23 13:36:28.703381 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-03-23 13:36:28.703387 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-03-23 13:36:28.703392 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-03-23 13:36:28.703397 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-23 13:36:28.703402 | orchestrator | 2025-03-23 13:36:28.703408 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-03-23 13:36:28.703413 | orchestrator | Sunday 23 March 2025 13:29:07 +0000 (0:00:01.997) 0:07:35.282 ********** 2025-03-23 13:36:28.703423 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-03-23 13:36:28.703428 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-03-23 13:36:28.703433 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-03-23 13:36:28.703439 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-03-23 13:36:28.703444 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-03-23 13:36:28.703449 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-03-23 13:36:28.703454 | orchestrator | 2025-03-23 13:36:28.703459 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-03-23 13:36:28.703465 | orchestrator | Sunday 23 March 2025 13:29:08 +0000 (0:00:01.481) 0:07:36.763 ********** 2025-03-23 13:36:28.703470 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.703475 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.703481 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.703486 | orchestrator | 2025-03-23 13:36:28.703491 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-03-23 13:36:28.703496 | orchestrator | Sunday 23 March 2025 13:29:09 +0000 (0:00:01.202) 0:07:37.966 ********** 2025-03-23 13:36:28.703502 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.703507 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.703512 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.703517 | orchestrator | 2025-03-23 13:36:28.703522 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-03-23 13:36:28.703528 | orchestrator | Sunday 23 March 2025 13:29:10 +0000 (0:00:00.417) 0:07:38.384 ********** 2025-03-23 13:36:28.703533 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.703538 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.703543 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.703549 | orchestrator | 2025-03-23 13:36:28.703554 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] **************************************** 2025-03-23 13:36:28.703559 | orchestrator | Sunday 23 March 2025 13:29:10 +0000 (0:00:00.503) 0:07:38.887 ********** 2025-03-23 13:36:28.703567 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:36:28.703572 | orchestrator | 2025-03-23 13:36:28.703580 | orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] ************* 2025-03-23 13:36:28.703585 | orchestrator | Sunday 23 March 2025 13:29:11 +0000 (0:00:01.042) 0:07:39.930 ********** 2025-03-23 13:36:28.703591 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.703596 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.703601 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.703606 | orchestrator | 2025-03-23 13:36:28.703611 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-03-23 13:36:28.703639 | orchestrator | 
Sunday 23 March 2025 13:29:12 +0000 (0:00:00.385) 0:07:40.315 ********** 2025-03-23 13:36:28.703646 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.703651 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.703656 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.703662 | orchestrator | 2025-03-23 13:36:28.703667 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************ 2025-03-23 13:36:28.703672 | orchestrator | Sunday 23 March 2025 13:29:12 +0000 (0:00:00.441) 0:07:40.756 ********** 2025-03-23 13:36:28.703677 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:36:28.703683 | orchestrator | 2025-03-23 13:36:28.703688 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-03-23 13:36:28.703693 | orchestrator | Sunday 23 March 2025 13:29:13 +0000 (0:00:01.132) 0:07:41.889 ********** 2025-03-23 13:36:28.703699 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.703704 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.703709 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.703714 | orchestrator | 2025-03-23 13:36:28.703719 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-03-23 13:36:28.703729 | orchestrator | Sunday 23 March 2025 13:29:15 +0000 (0:00:01.598) 0:07:43.487 ********** 2025-03-23 13:36:28.703734 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.703739 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.703744 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.703750 | orchestrator | 2025-03-23 13:36:28.703755 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-03-23 13:36:28.703760 | orchestrator | Sunday 23 March 2025 13:29:16 +0000 (0:00:01.316) 0:07:44.804 ********** 2025-03-23 13:36:28.703766 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.703771 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.703776 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.703781 | orchestrator | 2025-03-23 13:36:28.703787 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-03-23 13:36:28.703792 | orchestrator | Sunday 23 March 2025 13:29:18 +0000 (0:00:02.083) 0:07:46.887 ********** 2025-03-23 13:36:28.703797 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.703802 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.703808 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.703813 | orchestrator | 2025-03-23 13:36:28.703818 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-03-23 13:36:28.703823 | orchestrator | Sunday 23 March 2025 13:29:20 +0000 (0:00:02.076) 0:07:48.963 ********** 2025-03-23 13:36:28.703829 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.703834 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.703839 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-03-23 13:36:28.703844 | orchestrator | 2025-03-23 13:36:28.703850 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-03-23 13:36:28.703855 | orchestrator | Sunday 23 March 2025 13:29:21 +0000 (0:00:00.586) 0:07:49.550 ********** 2025-03-23 
13:36:28.703860 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-03-23 13:36:28.703865 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 2025-03-23 13:36:28.703871 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-03-23 13:36:28.703876 | orchestrator | 2025-03-23 13:36:28.703881 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-03-23 13:36:28.703886 | orchestrator | Sunday 23 March 2025 13:29:35 +0000 (0:00:13.806) 0:08:03.357 ********** 2025-03-23 13:36:28.703892 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-03-23 13:36:28.703897 | orchestrator | 2025-03-23 13:36:28.703902 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-03-23 13:36:28.703907 | orchestrator | Sunday 23 March 2025 13:29:37 +0000 (0:00:01.933) 0:08:05.290 ********** 2025-03-23 13:36:28.703913 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.703918 | orchestrator | 2025-03-23 13:36:28.703923 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-03-23 13:36:28.703928 | orchestrator | Sunday 23 March 2025 13:29:37 +0000 (0:00:00.576) 0:08:05.866 ********** 2025-03-23 13:36:28.703933 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.703939 | orchestrator | 2025-03-23 13:36:28.703944 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-03-23 13:36:28.703949 | orchestrator | Sunday 23 March 2025 13:29:38 +0000 (0:00:00.372) 0:08:06.239 ********** 2025-03-23 13:36:28.703954 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-03-23 13:36:28.703960 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-03-23 13:36:28.703965 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-03-23 13:36:28.703970 | orchestrator | 2025-03-23 13:36:28.703975 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] ************************************** 2025-03-23 13:36:28.703986 | orchestrator | Sunday 23 March 2025 13:29:44 +0000 (0:00:06.572) 0:08:12.811 ********** 2025-03-23 13:36:28.703992 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-03-23 13:36:28.703997 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-03-23 13:36:28.704002 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-03-23 13:36:28.704007 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-03-23 13:36:28.704013 | orchestrator | 2025-03-23 13:36:28.704018 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-03-23 13:36:28.704023 | orchestrator | Sunday 23 March 2025 13:29:49 +0000 (0:00:05.077) 0:08:17.889 ********** 2025-03-23 13:36:28.704028 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.704034 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.704039 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.704044 | orchestrator | 2025-03-23 13:36:28.704061 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-03-23 13:36:28.704068 | orchestrator | Sunday 23 
March 2025 13:29:50 +0000 (0:00:00.825) 0:08:18.715 ********** 2025-03-23 13:36:28.704073 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:36:28.704078 | orchestrator | 2025-03-23 13:36:28.704084 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-03-23 13:36:28.704089 | orchestrator | Sunday 23 March 2025 13:29:51 +0000 (0:00:00.920) 0:08:19.635 ********** 2025-03-23 13:36:28.704094 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.704099 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.704105 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.704110 | orchestrator | 2025-03-23 13:36:28.704115 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-03-23 13:36:28.704120 | orchestrator | Sunday 23 March 2025 13:29:51 +0000 (0:00:00.356) 0:08:19.992 ********** 2025-03-23 13:36:28.704125 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.704131 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.704136 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.704141 | orchestrator | 2025-03-23 13:36:28.704146 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-03-23 13:36:28.704151 | orchestrator | Sunday 23 March 2025 13:29:53 +0000 (0:00:01.648) 0:08:21.641 ********** 2025-03-23 13:36:28.704157 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-23 13:36:28.704162 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-23 13:36:28.704167 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-23 13:36:28.704172 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.704177 | orchestrator | 2025-03-23 13:36:28.704183 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-03-23 13:36:28.704188 | orchestrator | Sunday 23 March 2025 13:29:54 +0000 (0:00:00.791) 0:08:22.432 ********** 2025-03-23 13:36:28.704193 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.704198 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.704204 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.704209 | orchestrator | 2025-03-23 13:36:28.704214 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-03-23 13:36:28.704219 | orchestrator | Sunday 23 March 2025 13:29:54 +0000 (0:00:00.510) 0:08:22.942 ********** 2025-03-23 13:36:28.704224 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.704234 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.704239 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.704244 | orchestrator | 2025-03-23 13:36:28.704250 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-03-23 13:36:28.704255 | orchestrator | 2025-03-23 13:36:28.704260 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-03-23 13:36:28.704265 | orchestrator | Sunday 23 March 2025 13:29:57 +0000 (0:00:02.266) 0:08:25.209 ********** 2025-03-23 13:36:28.704274 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.704279 | orchestrator | 2025-03-23 13:36:28.704284 | orchestrator | TASK [ceph-handler : 
check for a mon container] ******************************** 2025-03-23 13:36:28.704289 | orchestrator | Sunday 23 March 2025 13:29:57 +0000 (0:00:00.887) 0:08:26.097 ********** 2025-03-23 13:36:28.704295 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.704300 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.704305 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.704310 | orchestrator | 2025-03-23 13:36:28.704316 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-03-23 13:36:28.704321 | orchestrator | Sunday 23 March 2025 13:29:58 +0000 (0:00:00.409) 0:08:26.507 ********** 2025-03-23 13:36:28.704326 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.704331 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.704337 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.704342 | orchestrator | 2025-03-23 13:36:28.704347 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-03-23 13:36:28.704352 | orchestrator | Sunday 23 March 2025 13:29:59 +0000 (0:00:00.854) 0:08:27.361 ********** 2025-03-23 13:36:28.704357 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.704363 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.704368 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.704373 | orchestrator | 2025-03-23 13:36:28.704378 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-03-23 13:36:28.704383 | orchestrator | Sunday 23 March 2025 13:30:00 +0000 (0:00:01.143) 0:08:28.505 ********** 2025-03-23 13:36:28.704389 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.704394 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.704399 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.704404 | orchestrator | 2025-03-23 13:36:28.704409 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-03-23 13:36:28.704414 | orchestrator | Sunday 23 March 2025 13:30:01 +0000 (0:00:00.773) 0:08:29.279 ********** 2025-03-23 13:36:28.704420 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.704425 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.704430 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.704435 | orchestrator | 2025-03-23 13:36:28.704441 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-03-23 13:36:28.704446 | orchestrator | Sunday 23 March 2025 13:30:01 +0000 (0:00:00.327) 0:08:29.606 ********** 2025-03-23 13:36:28.704451 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.704456 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.704461 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.704467 | orchestrator | 2025-03-23 13:36:28.704474 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-03-23 13:36:28.704479 | orchestrator | Sunday 23 March 2025 13:30:02 +0000 (0:00:00.629) 0:08:30.235 ********** 2025-03-23 13:36:28.704485 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.704490 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.704495 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.704501 | orchestrator | 2025-03-23 13:36:28.704506 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-03-23 13:36:28.704523 | orchestrator | Sunday 23 
March 2025 13:30:02 +0000 (0:00:00.404) 0:08:30.639 ********** 2025-03-23 13:36:28.704529 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.704534 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.704540 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.704545 | orchestrator | 2025-03-23 13:36:28.704550 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-03-23 13:36:28.704556 | orchestrator | Sunday 23 March 2025 13:30:02 +0000 (0:00:00.401) 0:08:31.041 ********** 2025-03-23 13:36:28.704561 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.704566 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.704574 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.704580 | orchestrator | 2025-03-23 13:36:28.704585 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-03-23 13:36:28.704590 | orchestrator | Sunday 23 March 2025 13:30:03 +0000 (0:00:00.388) 0:08:31.430 ********** 2025-03-23 13:36:28.704595 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.704601 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.704606 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.704611 | orchestrator | 2025-03-23 13:36:28.704643 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-03-23 13:36:28.704648 | orchestrator | Sunday 23 March 2025 13:30:03 +0000 (0:00:00.657) 0:08:32.087 ********** 2025-03-23 13:36:28.704654 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.704659 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.704665 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.704670 | orchestrator | 2025-03-23 13:36:28.704675 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-03-23 13:36:28.704680 | orchestrator | Sunday 23 March 2025 13:30:04 +0000 (0:00:00.946) 0:08:33.034 ********** 2025-03-23 13:36:28.704686 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.704691 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.704696 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.704702 | orchestrator | 2025-03-23 13:36:28.704707 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-03-23 13:36:28.704712 | orchestrator | Sunday 23 March 2025 13:30:05 +0000 (0:00:00.364) 0:08:33.399 ********** 2025-03-23 13:36:28.704718 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.704723 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.704728 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.704733 | orchestrator | 2025-03-23 13:36:28.704739 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-03-23 13:36:28.704744 | orchestrator | Sunday 23 March 2025 13:30:05 +0000 (0:00:00.401) 0:08:33.800 ********** 2025-03-23 13:36:28.704749 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.704755 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.704760 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.704765 | orchestrator | 2025-03-23 13:36:28.704771 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-03-23 13:36:28.704776 | orchestrator | Sunday 23 March 2025 13:30:06 +0000 (0:00:00.679) 0:08:34.480 ********** 2025-03-23 13:36:28.704781 | 
orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.704787 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.704792 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.704797 | orchestrator | 2025-03-23 13:36:28.704802 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-03-23 13:36:28.704808 | orchestrator | Sunday 23 March 2025 13:30:06 +0000 (0:00:00.427) 0:08:34.908 ********** 2025-03-23 13:36:28.704813 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.704818 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.704824 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.704829 | orchestrator | 2025-03-23 13:36:28.704834 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-03-23 13:36:28.704839 | orchestrator | Sunday 23 March 2025 13:30:07 +0000 (0:00:00.527) 0:08:35.435 ********** 2025-03-23 13:36:28.704845 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.704853 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.704858 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.704864 | orchestrator | 2025-03-23 13:36:28.704869 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-03-23 13:36:28.704874 | orchestrator | Sunday 23 March 2025 13:30:07 +0000 (0:00:00.415) 0:08:35.851 ********** 2025-03-23 13:36:28.704879 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.704885 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.704890 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.704898 | orchestrator | 2025-03-23 13:36:28.704904 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-03-23 13:36:28.704909 | orchestrator | Sunday 23 March 2025 13:30:08 +0000 (0:00:00.755) 0:08:36.606 ********** 2025-03-23 13:36:28.704914 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.704919 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.704925 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.704930 | orchestrator | 2025-03-23 13:36:28.704935 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-03-23 13:36:28.704940 | orchestrator | Sunday 23 March 2025 13:30:08 +0000 (0:00:00.471) 0:08:37.078 ********** 2025-03-23 13:36:28.704946 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.704951 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.704956 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.704961 | orchestrator | 2025-03-23 13:36:28.704966 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-03-23 13:36:28.704972 | orchestrator | Sunday 23 March 2025 13:30:09 +0000 (0:00:00.588) 0:08:37.667 ********** 2025-03-23 13:36:28.704977 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.704982 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.704987 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.704992 | orchestrator | 2025-03-23 13:36:28.704998 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-03-23 13:36:28.705003 | orchestrator | Sunday 23 March 2025 13:30:09 +0000 (0:00:00.409) 0:08:38.076 ********** 2025-03-23 13:36:28.705008 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705013 | orchestrator | skipping: [testbed-node-4] 
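On the OSD hosts (testbed-node-3/4/5) the same ceph-config osd-counting logic repeats in the entries that continue below, and there it also adds OSDs that already exist on the host. A sketch of that half, assuming the JSON from 'ceph-volume lvm list' is keyed by OSD id:

  - name: run 'ceph-volume lvm list' to see how many osds have already been created (sketch)
    ansible.builtin.command: ceph-volume lvm list --format json
    register: ceph_volume_lvm_list
    changed_when: false

  - name: set_fact num_osds (add existing osds) (sketch)
    ansible.builtin.set_fact:
      num_osds: "{{ num_osds | int + (ceph_volume_lvm_list.stdout | from_json) | length }}"
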
2025-03-23 13:36:28.705018 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705023 | orchestrator | 2025-03-23 13:36:28.705031 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-03-23 13:36:28.705048 | orchestrator | Sunday 23 March 2025 13:30:10 +0000 (0:00:00.730) 0:08:38.806 ********** 2025-03-23 13:36:28.705054 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705059 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705064 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705068 | orchestrator | 2025-03-23 13:36:28.705073 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-03-23 13:36:28.705078 | orchestrator | Sunday 23 March 2025 13:30:11 +0000 (0:00:00.423) 0:08:39.230 ********** 2025-03-23 13:36:28.705083 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705088 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705092 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705097 | orchestrator | 2025-03-23 13:36:28.705102 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-03-23 13:36:28.705107 | orchestrator | Sunday 23 March 2025 13:30:11 +0000 (0:00:00.456) 0:08:39.686 ********** 2025-03-23 13:36:28.705111 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705116 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705121 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705126 | orchestrator | 2025-03-23 13:36:28.705131 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-03-23 13:36:28.705135 | orchestrator | Sunday 23 March 2025 13:30:11 +0000 (0:00:00.361) 0:08:40.048 ********** 2025-03-23 13:36:28.705140 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705145 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705150 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705154 | orchestrator | 2025-03-23 13:36:28.705159 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-03-23 13:36:28.705164 | orchestrator | Sunday 23 March 2025 13:30:12 +0000 (0:00:00.686) 0:08:40.735 ********** 2025-03-23 13:36:28.705169 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705173 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705178 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705183 | orchestrator | 2025-03-23 13:36:28.705188 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-03-23 13:36:28.705196 | orchestrator | Sunday 23 March 2025 13:30:13 +0000 (0:00:00.443) 0:08:41.178 ********** 2025-03-23 13:36:28.705201 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705206 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705210 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705215 | orchestrator | 2025-03-23 13:36:28.705220 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-03-23 13:36:28.705225 | orchestrator | Sunday 23 March 2025 13:30:13 +0000 (0:00:00.521) 0:08:41.700 ********** 2025-03-23 13:36:28.705232 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705237 | orchestrator | skipping: [testbed-node-4] 2025-03-23 
13:36:28.705242 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705246 | orchestrator | 2025-03-23 13:36:28.705251 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-03-23 13:36:28.705256 | orchestrator | Sunday 23 March 2025 13:30:13 +0000 (0:00:00.390) 0:08:42.091 ********** 2025-03-23 13:36:28.705261 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705266 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705270 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705275 | orchestrator | 2025-03-23 13:36:28.705280 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-03-23 13:36:28.705285 | orchestrator | Sunday 23 March 2025 13:30:14 +0000 (0:00:00.754) 0:08:42.846 ********** 2025-03-23 13:36:28.705289 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705294 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705299 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705304 | orchestrator | 2025-03-23 13:36:28.705308 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-03-23 13:36:28.705313 | orchestrator | Sunday 23 March 2025 13:30:15 +0000 (0:00:00.373) 0:08:43.219 ********** 2025-03-23 13:36:28.705318 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705323 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705328 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705332 | orchestrator | 2025-03-23 13:36:28.705337 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-03-23 13:36:28.705342 | orchestrator | Sunday 23 March 2025 13:30:15 +0000 (0:00:00.384) 0:08:43.603 ********** 2025-03-23 13:36:28.705347 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-23 13:36:28.705352 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-23 13:36:28.705357 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-23 13:36:28.705362 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-23 13:36:28.705366 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705371 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705376 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-23 13:36:28.705381 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-23 13:36:28.705386 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705390 | orchestrator | 2025-03-23 13:36:28.705395 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-03-23 13:36:28.705400 | orchestrator | Sunday 23 March 2025 13:30:15 +0000 (0:00:00.443) 0:08:44.046 ********** 2025-03-23 13:36:28.705405 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-03-23 13:36:28.705412 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-03-23 13:36:28.705417 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705424 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-03-23 13:36:28.705429 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-03-23 13:36:28.705433 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705438 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-03-23 
13:36:28.705447 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-03-23 13:36:28.705451 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705456 | orchestrator | 2025-03-23 13:36:28.705461 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-03-23 13:36:28.705476 | orchestrator | Sunday 23 March 2025 13:30:16 +0000 (0:00:00.784) 0:08:44.830 ********** 2025-03-23 13:36:28.705482 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705487 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705492 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705496 | orchestrator | 2025-03-23 13:36:28.705501 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-03-23 13:36:28.705506 | orchestrator | Sunday 23 March 2025 13:30:17 +0000 (0:00:00.411) 0:08:45.242 ********** 2025-03-23 13:36:28.705511 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705516 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705520 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705525 | orchestrator | 2025-03-23 13:36:28.705530 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-03-23 13:36:28.705535 | orchestrator | Sunday 23 March 2025 13:30:17 +0000 (0:00:00.407) 0:08:45.649 ********** 2025-03-23 13:36:28.705540 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705545 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705549 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705554 | orchestrator | 2025-03-23 13:36:28.705559 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-03-23 13:36:28.705564 | orchestrator | Sunday 23 March 2025 13:30:17 +0000 (0:00:00.371) 0:08:46.021 ********** 2025-03-23 13:36:28.705568 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705573 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705578 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705583 | orchestrator | 2025-03-23 13:36:28.705587 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-03-23 13:36:28.705592 | orchestrator | Sunday 23 March 2025 13:30:18 +0000 (0:00:00.722) 0:08:46.744 ********** 2025-03-23 13:36:28.705597 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705602 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705606 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705611 | orchestrator | 2025-03-23 13:36:28.705626 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-03-23 13:36:28.705631 | orchestrator | Sunday 23 March 2025 13:30:18 +0000 (0:00:00.397) 0:08:47.141 ********** 2025-03-23 13:36:28.705636 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705641 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705646 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705650 | orchestrator | 2025-03-23 13:36:28.705655 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-03-23 13:36:28.705662 | orchestrator | Sunday 23 March 2025 13:30:19 +0000 (0:00:00.464) 0:08:47.606 ********** 2025-03-23 13:36:28.705667 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.705672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.705677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.705682 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705686 | orchestrator | 2025-03-23 13:36:28.705691 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-03-23 13:36:28.705696 | orchestrator | Sunday 23 March 2025 13:30:20 +0000 (0:00:00.675) 0:08:48.281 ********** 2025-03-23 13:36:28.705701 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.705706 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.705710 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.705715 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705723 | orchestrator | 2025-03-23 13:36:28.705730 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-03-23 13:36:28.705735 | orchestrator | Sunday 23 March 2025 13:30:20 +0000 (0:00:00.639) 0:08:48.920 ********** 2025-03-23 13:36:28.705740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.705745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.705750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.705755 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705759 | orchestrator | 2025-03-23 13:36:28.705764 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-23 13:36:28.705769 | orchestrator | Sunday 23 March 2025 13:30:21 +0000 (0:00:00.884) 0:08:49.805 ********** 2025-03-23 13:36:28.705774 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705778 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705783 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705788 | orchestrator | 2025-03-23 13:36:28.705793 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-03-23 13:36:28.705798 | orchestrator | Sunday 23 March 2025 13:30:22 +0000 (0:00:00.531) 0:08:50.336 ********** 2025-03-23 13:36:28.705802 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-23 13:36:28.705807 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705812 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-23 13:36:28.705817 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705821 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-23 13:36:28.705826 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705831 | orchestrator | 2025-03-23 13:36:28.705836 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-03-23 13:36:28.705841 | orchestrator | Sunday 23 March 2025 13:30:22 +0000 (0:00:00.515) 0:08:50.852 ********** 2025-03-23 13:36:28.705845 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705850 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705855 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705860 | orchestrator | 2025-03-23 13:36:28.705864 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-23 13:36:28.705869 | 
orchestrator | Sunday 23 March 2025 13:30:23 +0000 (0:00:00.375) 0:08:51.228 ********** 2025-03-23 13:36:28.705874 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705878 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705883 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705888 | orchestrator | 2025-03-23 13:36:28.705904 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-03-23 13:36:28.705910 | orchestrator | Sunday 23 March 2025 13:30:23 +0000 (0:00:00.313) 0:08:51.541 ********** 2025-03-23 13:36:28.705914 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-23 13:36:28.705919 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705924 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-23 13:36:28.705929 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705934 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-23 13:36:28.705939 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705943 | orchestrator | 2025-03-23 13:36:28.705948 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-03-23 13:36:28.705953 | orchestrator | Sunday 23 March 2025 13:30:24 +0000 (0:00:00.674) 0:08:52.216 ********** 2025-03-23 13:36:28.705958 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-03-23 13:36:28.705963 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.705968 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-03-23 13:36:28.705972 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.705981 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-03-23 13:36:28.705989 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.705994 | orchestrator | 2025-03-23 13:36:28.705998 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-03-23 13:36:28.706003 | orchestrator | Sunday 23 March 2025 13:30:24 +0000 (0:00:00.404) 0:08:52.621 ********** 2025-03-23 13:36:28.706008 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.706032 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.706038 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.706042 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.706047 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-03-23 13:36:28.706052 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-03-23 13:36:28.706057 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-03-23 13:36:28.706062 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.706067 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-03-23 13:36:28.706071 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-03-23 13:36:28.706076 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-03-23 13:36:28.706081 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.706086 | orchestrator | 2025-03-23 13:36:28.706090 | orchestrator | TASK [ceph-config 
: generate ceph.conf configuration file] ********************* 2025-03-23 13:36:28.706095 | orchestrator | Sunday 23 March 2025 13:30:25 +0000 (0:00:00.623) 0:08:53.244 ********** 2025-03-23 13:36:28.706100 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.706105 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.706110 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.706114 | orchestrator | 2025-03-23 13:36:28.706119 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-03-23 13:36:28.706124 | orchestrator | Sunday 23 March 2025 13:30:25 +0000 (0:00:00.726) 0:08:53.971 ********** 2025-03-23 13:36:28.706129 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-23 13:36:28.706134 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.706138 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-03-23 13:36:28.706143 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.706148 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-03-23 13:36:28.706153 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.706157 | orchestrator | 2025-03-23 13:36:28.706162 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-03-23 13:36:28.706167 | orchestrator | Sunday 23 March 2025 13:30:26 +0000 (0:00:00.559) 0:08:54.531 ********** 2025-03-23 13:36:28.706172 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.706177 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.706202 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.706207 | orchestrator | 2025-03-23 13:36:28.706212 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-03-23 13:36:28.706217 | orchestrator | Sunday 23 March 2025 13:30:27 +0000 (0:00:01.178) 0:08:55.709 ********** 2025-03-23 13:36:28.706222 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.706227 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.706231 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.706236 | orchestrator | 2025-03-23 13:36:28.706241 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-03-23 13:36:28.706246 | orchestrator | Sunday 23 March 2025 13:30:28 +0000 (0:00:00.630) 0:08:56.340 ********** 2025-03-23 13:36:28.706251 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.706256 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.706260 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.706265 | orchestrator | 2025-03-23 13:36:28.706270 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-03-23 13:36:28.706278 | orchestrator | Sunday 23 March 2025 13:30:28 +0000 (0:00:00.682) 0:08:57.023 ********** 2025-03-23 13:36:28.706285 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-03-23 13:36:28.706290 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-23 13:36:28.706295 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-23 13:36:28.706300 | orchestrator | 2025-03-23 13:36:28.706305 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-03-23 13:36:28.706309 | orchestrator | Sunday 23 March 2025 13:30:29 +0000 (0:00:01.072) 
0:08:58.095 ********** 2025-03-23 13:36:28.706327 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.706333 | orchestrator | 2025-03-23 13:36:28.706338 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-03-23 13:36:28.706343 | orchestrator | Sunday 23 March 2025 13:30:30 +0000 (0:00:00.855) 0:08:58.950 ********** 2025-03-23 13:36:28.706347 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.706352 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.706357 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.706362 | orchestrator | 2025-03-23 13:36:28.706367 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-03-23 13:36:28.706371 | orchestrator | Sunday 23 March 2025 13:30:31 +0000 (0:00:00.656) 0:08:59.607 ********** 2025-03-23 13:36:28.706376 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.706381 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.706385 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.706390 | orchestrator | 2025-03-23 13:36:28.706395 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-03-23 13:36:28.706400 | orchestrator | Sunday 23 March 2025 13:30:31 +0000 (0:00:00.358) 0:08:59.965 ********** 2025-03-23 13:36:28.706405 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.706409 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.706414 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.706419 | orchestrator | 2025-03-23 13:36:28.706424 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-03-23 13:36:28.706428 | orchestrator | Sunday 23 March 2025 13:30:32 +0000 (0:00:00.446) 0:09:00.411 ********** 2025-03-23 13:36:28.706433 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.706438 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.706443 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.706448 | orchestrator | 2025-03-23 13:36:28.706452 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-03-23 13:36:28.706457 | orchestrator | Sunday 23 March 2025 13:30:32 +0000 (0:00:00.346) 0:09:00.757 ********** 2025-03-23 13:36:28.706462 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.706467 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.706471 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.706476 | orchestrator | 2025-03-23 13:36:28.706481 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-03-23 13:36:28.706486 | orchestrator | Sunday 23 March 2025 13:30:33 +0000 (0:00:01.145) 0:09:01.903 ********** 2025-03-23 13:36:28.706490 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.706495 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.706500 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.706505 | orchestrator | 2025-03-23 13:36:28.706509 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-03-23 13:36:28.706514 | orchestrator | Sunday 23 March 2025 13:30:34 +0000 (0:00:00.690) 0:09:02.594 ********** 2025-03-23 13:36:28.706519 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': 
True}) 2025-03-23 13:36:28.706524 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-03-23 13:36:28.706532 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-03-23 13:36:28.706537 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-03-23 13:36:28.706541 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-03-23 13:36:28.706546 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-03-23 13:36:28.706551 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-03-23 13:36:28.706555 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-03-23 13:36:28.706560 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-03-23 13:36:28.706565 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-03-23 13:36:28.706570 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-03-23 13:36:28.706575 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-03-23 13:36:28.706579 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-03-23 13:36:28.706584 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-03-23 13:36:28.706589 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-03-23 13:36:28.706594 | orchestrator | 2025-03-23 13:36:28.706598 | orchestrator | TASK [ceph-osd : install dependencies] ***************************************** 2025-03-23 13:36:28.706603 | orchestrator | Sunday 23 March 2025 13:30:38 +0000 (0:00:03.588) 0:09:06.182 ********** 2025-03-23 13:36:28.706608 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.706635 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.706644 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.706649 | orchestrator | 2025-03-23 13:36:28.706654 | orchestrator | TASK [ceph-osd : include_tasks common.yml] ************************************* 2025-03-23 13:36:28.706659 | orchestrator | Sunday 23 March 2025 13:30:38 +0000 (0:00:00.754) 0:09:06.936 ********** 2025-03-23 13:36:28.706663 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.706668 | orchestrator | 2025-03-23 13:36:28.706676 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] ********************* 2025-03-23 13:36:28.706680 | orchestrator | Sunday 23 March 2025 13:30:39 +0000 (0:00:00.692) 0:09:07.629 ********** 2025-03-23 13:36:28.706685 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-03-23 13:36:28.706702 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-03-23 13:36:28.706708 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-03-23 13:36:28.706713 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-03-23 13:36:28.706718 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-03-23 13:36:28.706723 | orchestrator | ok: 
[testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-03-23 13:36:28.706727 | orchestrator | 2025-03-23 13:36:28.706732 | orchestrator | TASK [ceph-osd : get keys from monitors] *************************************** 2025-03-23 13:36:28.706737 | orchestrator | Sunday 23 March 2025 13:30:40 +0000 (0:00:01.228) 0:09:08.858 ********** 2025-03-23 13:36:28.706742 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-23 13:36:28.706747 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-23 13:36:28.706751 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-03-23 13:36:28.706756 | orchestrator | 2025-03-23 13:36:28.706761 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] *********************************** 2025-03-23 13:36:28.706766 | orchestrator | Sunday 23 March 2025 13:30:43 +0000 (0:00:02.336) 0:09:11.195 ********** 2025-03-23 13:36:28.706771 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-03-23 13:36:28.706779 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-23 13:36:28.706784 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.706791 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-03-23 13:36:28.706796 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-03-23 13:36:28.706801 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.706806 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-03-23 13:36:28.706810 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-03-23 13:36:28.706815 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.706820 | orchestrator | 2025-03-23 13:36:28.706824 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************ 2025-03-23 13:36:28.706829 | orchestrator | Sunday 23 March 2025 13:30:44 +0000 (0:00:01.430) 0:09:12.625 ********** 2025-03-23 13:36:28.706834 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-03-23 13:36:28.706839 | orchestrator | 2025-03-23 13:36:28.706843 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] ************************** 2025-03-23 13:36:28.706848 | orchestrator | Sunday 23 March 2025 13:30:46 +0000 (0:00:02.490) 0:09:15.116 ********** 2025-03-23 13:36:28.706853 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.706858 | orchestrator | 2025-03-23 13:36:28.706863 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] *** 2025-03-23 13:36:28.706868 | orchestrator | Sunday 23 March 2025 13:30:47 +0000 (0:00:00.853) 0:09:15.969 ********** 2025-03-23 13:36:28.706872 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.706877 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.706882 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.706887 | orchestrator | 2025-03-23 13:36:28.706892 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-03-23 13:36:28.706896 | orchestrator | Sunday 23 March 2025 13:30:48 +0000 (0:00:00.355) 0:09:16.324 ********** 2025-03-23 13:36:28.706901 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.706906 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.706911 | orchestrator | skipping: [testbed-node-5] 2025-03-23 
13:36:28.706915 | orchestrator | 2025-03-23 13:36:28.706920 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-03-23 13:36:28.706925 | orchestrator | Sunday 23 March 2025 13:30:48 +0000 (0:00:00.337) 0:09:16.661 ********** 2025-03-23 13:36:28.706930 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.706935 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.706939 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.706944 | orchestrator | 2025-03-23 13:36:28.706949 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-03-23 13:36:28.706954 | orchestrator | Sunday 23 March 2025 13:30:48 +0000 (0:00:00.337) 0:09:16.999 ********** 2025-03-23 13:36:28.706958 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.706963 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.706968 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.706973 | orchestrator | 2025-03-23 13:36:28.706978 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-03-23 13:36:28.706983 | orchestrator | Sunday 23 March 2025 13:30:49 +0000 (0:00:00.784) 0:09:17.783 ********** 2025-03-23 13:36:28.706987 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.706992 | orchestrator | 2025-03-23 13:36:28.706997 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] ********************* 2025-03-23 13:36:28.707002 | orchestrator | Sunday 23 March 2025 13:30:50 +0000 (0:00:00.676) 0:09:18.460 ********** 2025-03-23 13:36:28.707006 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9205bfbb-9f4f-501b-85a3-60f418fff160', 'data_vg': 'ceph-9205bfbb-9f4f-501b-85a3-60f418fff160'}) 2025-03-23 13:36:28.707015 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233', 'data_vg': 'ceph-5102d35b-39ce-5a2f-80bc-7bd1ce5c8233'}) 2025-03-23 13:36:28.707020 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8229b7a0-df8d-5815-8245-22e3d24081aa', 'data_vg': 'ceph-8229b7a0-df8d-5815-8245-22e3d24081aa'}) 2025-03-23 13:36:28.707025 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5a8506d3-5e74-5dde-8df3-17f522800900', 'data_vg': 'ceph-5a8506d3-5e74-5dde-8df3-17f522800900'}) 2025-03-23 13:36:28.707041 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb', 'data_vg': 'ceph-cbe43cef-cccc-569d-93a4-8e7e2e8a94cb'}) 2025-03-23 13:36:28.707047 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0ab6ed36-da2c-5faf-8aed-224e80357d25', 'data_vg': 'ceph-0ab6ed36-da2c-5faf-8aed-224e80357d25'}) 2025-03-23 13:36:28.707051 | orchestrator | 2025-03-23 13:36:28.707056 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************ 2025-03-23 13:36:28.707061 | orchestrator | Sunday 23 March 2025 13:31:33 +0000 (0:00:43.029) 0:10:01.489 ********** 2025-03-23 13:36:28.707066 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.707071 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.707076 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.707080 | orchestrator | 2025-03-23 13:36:28.707085 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] 
********************************* 2025-03-23 13:36:28.707090 | orchestrator | Sunday 23 March 2025 13:31:33 +0000 (0:00:00.574) 0:10:02.063 ********** 2025-03-23 13:36:28.707095 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.707100 | orchestrator | 2025-03-23 13:36:28.707104 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-03-23 13:36:28.707109 | orchestrator | Sunday 23 March 2025 13:31:34 +0000 (0:00:00.660) 0:10:02.723 ********** 2025-03-23 13:36:28.707114 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.707119 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.707123 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.707128 | orchestrator | 2025-03-23 13:36:28.707133 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-03-23 13:36:28.707138 | orchestrator | Sunday 23 March 2025 13:31:35 +0000 (0:00:00.773) 0:10:03.496 ********** 2025-03-23 13:36:28.707143 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.707150 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.707155 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.707160 | orchestrator | 2025-03-23 13:36:28.707164 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-03-23 13:36:28.707169 | orchestrator | Sunday 23 March 2025 13:31:37 +0000 (0:00:01.804) 0:10:05.301 ********** 2025-03-23 13:36:28.707174 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.707179 | orchestrator | 2025-03-23 13:36:28.707184 | orchestrator | TASK [ceph-osd : generate systemd unit file] *********************************** 2025-03-23 13:36:28.707188 | orchestrator | Sunday 23 March 2025 13:31:37 +0000 (0:00:00.825) 0:10:06.126 ********** 2025-03-23 13:36:28.707193 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.707198 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.707203 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.707207 | orchestrator | 2025-03-23 13:36:28.707212 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-03-23 13:36:28.707219 | orchestrator | Sunday 23 March 2025 13:31:39 +0000 (0:00:01.302) 0:10:07.428 ********** 2025-03-23 13:36:28.707224 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.707229 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.707234 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.707238 | orchestrator | 2025-03-23 13:36:28.707243 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 2025-03-23 13:36:28.707251 | orchestrator | Sunday 23 March 2025 13:31:41 +0000 (0:00:01.758) 0:10:09.187 ********** 2025-03-23 13:36:28.707256 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.707260 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.707265 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.707270 | orchestrator | 2025-03-23 13:36:28.707275 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-03-23 13:36:28.707279 | orchestrator | Sunday 23 March 2025 13:31:43 +0000 (0:00:02.011) 0:10:11.199 ********** 2025-03-23 13:36:28.707284 | orchestrator | skipping: 
[testbed-node-3] 2025-03-23 13:36:28.707289 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.707294 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.707299 | orchestrator | 2025-03-23 13:36:28.707303 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] *********************** 2025-03-23 13:36:28.707308 | orchestrator | Sunday 23 March 2025 13:31:43 +0000 (0:00:00.421) 0:10:11.621 ********** 2025-03-23 13:36:28.707313 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.707318 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.707322 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.707330 | orchestrator | 2025-03-23 13:36:28.707335 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-03-23 13:36:28.707340 | orchestrator | Sunday 23 March 2025 13:31:44 +0000 (0:00:00.734) 0:10:12.356 ********** 2025-03-23 13:36:28.707345 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-03-23 13:36:28.707350 | orchestrator | ok: [testbed-node-4] => (item=2) 2025-03-23 13:36:28.707355 | orchestrator | ok: [testbed-node-5] => (item=1) 2025-03-23 13:36:28.707359 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-03-23 13:36:28.707364 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-03-23 13:36:28.707369 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-03-23 13:36:28.707374 | orchestrator | 2025-03-23 13:36:28.707378 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-03-23 13:36:28.707383 | orchestrator | Sunday 23 March 2025 13:31:45 +0000 (0:00:01.259) 0:10:13.616 ********** 2025-03-23 13:36:28.707388 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-03-23 13:36:28.707393 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-03-23 13:36:28.707398 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-03-23 13:36:28.707402 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-03-23 13:36:28.707407 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-03-23 13:36:28.707412 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-03-23 13:36:28.707417 | orchestrator | 2025-03-23 13:36:28.707421 | orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-03-23 13:36:28.707437 | orchestrator | Sunday 23 March 2025 13:31:49 +0000 (0:00:03.796) 0:10:17.412 ********** 2025-03-23 13:36:28.707443 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.707448 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.707453 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-03-23 13:36:28.707457 | orchestrator | 2025-03-23 13:36:28.707462 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 2025-03-23 13:36:28.707467 | orchestrator | Sunday 23 March 2025 13:31:52 +0000 (0:00:02.990) 0:10:20.403 ********** 2025-03-23 13:36:28.707471 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.707476 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.707481 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left). 
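The two 'changed' blocks above are where the OSD nodes actually get built: kernel tuning is applied (fs.aio-max-nr=1048576, fs.file-max=26234859, vm.zone_reclaim_mode=0, vm.swappiness=10, vm.min_free_kbytes=67584), then 'ceph-volume lvm create' turns each pre-provisioned data/data_vg LVM pair into a bluestore OSD (with dmcrypt, per the container_env_args fact selected earlier), after which systemd units are generated, ceph-osd.target is enabled, the ceph-osd@<id> services are started, and the play waits behind the noup flag until all OSDs report up. A rough, non-containerized sketch of the tuning and creation steps; the sysctl values mirror the log, while the play/file name and the single LVM entry are placeholders (the real role drives ceph-volume inside a container):

# osd_prepare_sketch.yml -- illustrative approximation of the logged steps
- hosts: osds
  become: true
  vars:
    os_tuning_params:            # values as reported in the log above
      - { name: fs.aio-max-nr, value: 1048576 }
      - { name: fs.file-max, value: 26234859 }
      - { name: vm.zone_reclaim_mode, value: 0 }
      - { name: vm.swappiness, value: 10 }
      - { name: vm.min_free_kbytes, value: 67584 }
    lvm_volumes:                 # placeholder entry; the real names are the osd-block-<uuid>/ceph-<uuid> pairs above
      - { data: osd-block-example, data_vg: ceph-example }
  tasks:
    - name: apply operating system tuning
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        sysctl_file: /etc/sysctl.d/ceph-tuning.conf
      loop: "{{ os_tuning_params }}"

    - name: use ceph-volume to create bluestore osds
      # one OSD per logical volume; --dmcrypt matches the osd_dmcrypt=1 environment chosen above
      ansible.builtin.command: >
        ceph-volume lvm create --bluestore --dmcrypt
        --data {{ item.data_vg }}/{{ item.data }}
      loop: "{{ lvm_volumes }}"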
2025-03-23 13:36:28.707486 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-03-23 13:36:28.707491 | orchestrator | 2025-03-23 13:36:28.707496 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-03-23 13:36:28.707501 | orchestrator | Sunday 23 March 2025 13:32:04 +0000 (0:00:12.702) 0:10:33.105 ********** 2025-03-23 13:36:28.707505 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.707510 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.707518 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.707522 | orchestrator | 2025-03-23 13:36:28.707527 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-03-23 13:36:28.707532 | orchestrator | Sunday 23 March 2025 13:32:05 +0000 (0:00:00.654) 0:10:33.759 ********** 2025-03-23 13:36:28.707537 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.707542 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.707546 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.707551 | orchestrator | 2025-03-23 13:36:28.707556 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-03-23 13:36:28.707561 | orchestrator | Sunday 23 March 2025 13:32:06 +0000 (0:00:01.345) 0:10:35.105 ********** 2025-03-23 13:36:28.707566 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.707570 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.707575 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.707580 | orchestrator | 2025-03-23 13:36:28.707585 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-03-23 13:36:28.707590 | orchestrator | Sunday 23 March 2025 13:32:07 +0000 (0:00:00.752) 0:10:35.858 ********** 2025-03-23 13:36:28.707594 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.707599 | orchestrator | 2025-03-23 13:36:28.707604 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-03-23 13:36:28.707609 | orchestrator | Sunday 23 March 2025 13:32:08 +0000 (0:00:01.041) 0:10:36.899 ********** 2025-03-23 13:36:28.707623 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.707628 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.707633 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.707637 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.707642 | orchestrator | 2025-03-23 13:36:28.707647 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-03-23 13:36:28.707652 | orchestrator | Sunday 23 March 2025 13:32:09 +0000 (0:00:00.557) 0:10:37.457 ********** 2025-03-23 13:36:28.707657 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.707661 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.707666 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.707671 | orchestrator | 2025-03-23 13:36:28.707676 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-03-23 13:36:28.707680 | orchestrator | Sunday 23 March 2025 13:32:09 +0000 (0:00:00.473) 0:10:37.930 ********** 2025-03-23 13:36:28.707685 | orchestrator | skipping: [testbed-node-3] 2025-03-23 
13:36:28.707690 | orchestrator | 2025-03-23 13:36:28.707695 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-03-23 13:36:28.707699 | orchestrator | Sunday 23 March 2025 13:32:10 +0000 (0:00:00.293) 0:10:38.224 ********** 2025-03-23 13:36:28.707704 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.707709 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.707714 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.707718 | orchestrator | 2025-03-23 13:36:28.707723 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-03-23 13:36:28.707731 | orchestrator | Sunday 23 March 2025 13:32:10 +0000 (0:00:00.786) 0:10:39.010 ********** 2025-03-23 13:36:28.707737 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.707741 | orchestrator | 2025-03-23 13:36:28.707746 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-03-23 13:36:28.707751 | orchestrator | Sunday 23 March 2025 13:32:11 +0000 (0:00:00.507) 0:10:39.518 ********** 2025-03-23 13:36:28.707756 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.707761 | orchestrator | 2025-03-23 13:36:28.707765 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-03-23 13:36:28.707770 | orchestrator | Sunday 23 March 2025 13:32:11 +0000 (0:00:00.288) 0:10:39.806 ********** 2025-03-23 13:36:28.707778 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.707782 | orchestrator | 2025-03-23 13:36:28.707787 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-03-23 13:36:28.707792 | orchestrator | Sunday 23 March 2025 13:32:11 +0000 (0:00:00.174) 0:10:39.980 ********** 2025-03-23 13:36:28.707797 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.707801 | orchestrator | 2025-03-23 13:36:28.707806 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-03-23 13:36:28.707811 | orchestrator | Sunday 23 March 2025 13:32:12 +0000 (0:00:00.290) 0:10:40.271 ********** 2025-03-23 13:36:28.707816 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.707820 | orchestrator | 2025-03-23 13:36:28.707825 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-03-23 13:36:28.707830 | orchestrator | Sunday 23 March 2025 13:32:12 +0000 (0:00:00.277) 0:10:40.548 ********** 2025-03-23 13:36:28.707835 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.707851 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.707857 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.707862 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.707867 | orchestrator | 2025-03-23 13:36:28.707872 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-03-23 13:36:28.707877 | orchestrator | Sunday 23 March 2025 13:32:12 +0000 (0:00:00.424) 0:10:40.973 ********** 2025-03-23 13:36:28.707881 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.707888 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.707893 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.707898 | orchestrator | 2025-03-23 13:36:28.707903 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg 
autoscale on pools] *************** 2025-03-23 13:36:28.707908 | orchestrator | Sunday 23 March 2025 13:32:13 +0000 (0:00:00.352) 0:10:41.326 ********** 2025-03-23 13:36:28.707912 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.707917 | orchestrator | 2025-03-23 13:36:28.707922 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-03-23 13:36:28.707927 | orchestrator | Sunday 23 March 2025 13:32:14 +0000 (0:00:01.019) 0:10:42.345 ********** 2025-03-23 13:36:28.707932 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.707936 | orchestrator | 2025-03-23 13:36:28.707941 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-03-23 13:36:28.707946 | orchestrator | Sunday 23 March 2025 13:32:14 +0000 (0:00:00.388) 0:10:42.733 ********** 2025-03-23 13:36:28.707951 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.707956 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.707960 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.707965 | orchestrator | 2025-03-23 13:36:28.707970 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-03-23 13:36:28.707975 | orchestrator | 2025-03-23 13:36:28.707979 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-03-23 13:36:28.707984 | orchestrator | Sunday 23 March 2025 13:32:17 +0000 (0:00:03.402) 0:10:46.136 ********** 2025-03-23 13:36:28.707989 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.707995 | orchestrator | 2025-03-23 13:36:28.707999 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-03-23 13:36:28.708004 | orchestrator | Sunday 23 March 2025 13:32:19 +0000 (0:00:01.568) 0:10:47.704 ********** 2025-03-23 13:36:28.708009 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.708014 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.708019 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.708024 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.708028 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.708033 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.708041 | orchestrator | 2025-03-23 13:36:28.708046 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-03-23 13:36:28.708051 | orchestrator | Sunday 23 March 2025 13:32:20 +0000 (0:00:00.844) 0:10:48.548 ********** 2025-03-23 13:36:28.708056 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.708060 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.708065 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.708070 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.708075 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.708080 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.708085 | orchestrator | 2025-03-23 13:36:28.708089 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-03-23 13:36:28.708094 | orchestrator | Sunday 23 March 2025 13:32:21 +0000 (0:00:01.385) 0:10:49.934 ********** 2025-03-23 13:36:28.708099 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.708104 | orchestrator | skipping: 
[testbed-node-1] 2025-03-23 13:36:28.708108 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.708113 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.708118 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.708123 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.708128 | orchestrator | 2025-03-23 13:36:28.708132 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-03-23 13:36:28.708137 | orchestrator | Sunday 23 March 2025 13:32:23 +0000 (0:00:01.332) 0:10:51.266 ********** 2025-03-23 13:36:28.708142 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.708147 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.708152 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.708157 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.708162 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.708166 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.708171 | orchestrator | 2025-03-23 13:36:28.708176 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-03-23 13:36:28.708183 | orchestrator | Sunday 23 March 2025 13:32:24 +0000 (0:00:01.107) 0:10:52.374 ********** 2025-03-23 13:36:28.708188 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.708193 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.708198 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.708203 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.708207 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.708212 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.708217 | orchestrator | 2025-03-23 13:36:28.708222 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-03-23 13:36:28.708226 | orchestrator | Sunday 23 March 2025 13:32:25 +0000 (0:00:01.117) 0:10:53.491 ********** 2025-03-23 13:36:28.708231 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.708236 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.708241 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.708246 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.708250 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.708255 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.708260 | orchestrator | 2025-03-23 13:36:28.708265 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-03-23 13:36:28.708270 | orchestrator | Sunday 23 March 2025 13:32:26 +0000 (0:00:00.720) 0:10:54.212 ********** 2025-03-23 13:36:28.708275 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.708279 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.708284 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.708289 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.708305 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.708311 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.708316 | orchestrator | 2025-03-23 13:36:28.708320 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-03-23 13:36:28.708325 | orchestrator | Sunday 23 March 2025 13:32:26 +0000 (0:00:00.928) 0:10:55.140 ********** 2025-03-23 13:36:28.708330 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.708338 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.708345 | 
orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.708350 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.708355 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.708359 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.708364 | orchestrator | 2025-03-23 13:36:28.708369 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-03-23 13:36:28.708374 | orchestrator | Sunday 23 March 2025 13:32:27 +0000 (0:00:00.683) 0:10:55.824 ********** 2025-03-23 13:36:28.708378 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.708383 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.708388 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.708393 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.708398 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.708403 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.708407 | orchestrator | 2025-03-23 13:36:28.708412 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-03-23 13:36:28.708417 | orchestrator | Sunday 23 March 2025 13:32:28 +0000 (0:00:00.978) 0:10:56.802 ********** 2025-03-23 13:36:28.708422 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.708426 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.708431 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.708436 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.708441 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.708446 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.708451 | orchestrator | 2025-03-23 13:36:28.708456 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-03-23 13:36:28.708460 | orchestrator | Sunday 23 March 2025 13:32:29 +0000 (0:00:00.679) 0:10:57.482 ********** 2025-03-23 13:36:28.708465 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.708470 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.708475 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.708479 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.708484 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.708489 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.708494 | orchestrator | 2025-03-23 13:36:28.708499 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-03-23 13:36:28.708503 | orchestrator | Sunday 23 March 2025 13:32:30 +0000 (0:00:01.331) 0:10:58.813 ********** 2025-03-23 13:36:28.708508 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.708513 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.708518 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.708523 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.708528 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.708533 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.708537 | orchestrator | 2025-03-23 13:36:28.708542 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-03-23 13:36:28.708547 | orchestrator | Sunday 23 March 2025 13:32:31 +0000 (0:00:00.745) 0:10:59.559 ********** 2025-03-23 13:36:28.708552 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.708556 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.708561 | orchestrator | ok: [testbed-node-2] 
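The osds handler above skips all of its restart-related tasks because no existing OSD configuration changed on this deploy (only the temporary script directory is created and removed); its task names nonetheless describe the restart choreography ceph-ansible would use: pause the balancer and pg autoscaler, restart the ceph-osd daemons, then re-enable both. A condensed sketch of that sequence, assuming 'osds' and 'mons' inventory groups, placeholder OSD ids, and a ceph CLI with an admin keyring reachable on the first mon (in this containerized testbed the commands would in practice run inside the mon container); the real handler_osds.yml also handles per-pool pg autoscale settings and dedicated restart scripts:

# osd_restart_handler_sketch.yml -- illustrative only, not ceph-ansible's handler_osds.yml
- hosts: osds
  become: true
  vars:
    osd_ids: [0, 5]              # placeholder ids; the handler collects the real ids per host
  tasks:
    - name: disable balancer
      # keep the balancer from rebalancing data while daemons bounce
      ansible.builtin.command: ceph balancer off
      delegate_to: "{{ groups['mons'][0] }}"
      run_once: true
      changed_when: true

    - name: restart ceph osds daemon(s)
      ansible.builtin.systemd:
        name: "ceph-osd@{{ item }}"
        state: restarted
      loop: "{{ osd_ids }}"

    - name: re-enable balancer
      ansible.builtin.command: ceph balancer on
      delegate_to: "{{ groups['mons'][0] }}"
      run_once: true
      changed_when: true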
2025-03-23 13:36:28.708566 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.708571 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.708576 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.708581 | orchestrator | 2025-03-23 13:36:28.708586 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-03-23 13:36:28.708590 | orchestrator | Sunday 23 March 2025 13:32:32 +0000 (0:00:01.134) 0:11:00.694 ********** 2025-03-23 13:36:28.708595 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.708600 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.708605 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.708610 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.708628 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.708633 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.708638 | orchestrator | 2025-03-23 13:36:28.708643 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-03-23 13:36:28.708647 | orchestrator | Sunday 23 March 2025 13:32:33 +0000 (0:00:00.793) 0:11:01.487 ********** 2025-03-23 13:36:28.708652 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.708657 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.708662 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.708667 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.708672 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.708676 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.708681 | orchestrator | 2025-03-23 13:36:28.708686 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-03-23 13:36:28.708691 | orchestrator | Sunday 23 March 2025 13:32:34 +0000 (0:00:01.137) 0:11:02.625 ********** 2025-03-23 13:36:28.708696 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.708700 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.708706 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.708711 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.708715 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.708720 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.708728 | orchestrator | 2025-03-23 13:36:28.708733 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-03-23 13:36:28.708738 | orchestrator | Sunday 23 March 2025 13:32:35 +0000 (0:00:00.753) 0:11:03.378 ********** 2025-03-23 13:36:28.708743 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.708748 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.708752 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.708757 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.708762 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.708767 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.708772 | orchestrator | 2025-03-23 13:36:28.708777 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-03-23 13:36:28.708781 | orchestrator | Sunday 23 March 2025 13:32:36 +0000 (0:00:00.995) 0:11:04.373 ********** 2025-03-23 13:36:28.708786 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.708802 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.708808 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.708814 | orchestrator | 
skipping: [testbed-node-3] 2025-03-23 13:36:28.708819 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.708823 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.708828 | orchestrator | 2025-03-23 13:36:28.708833 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-03-23 13:36:28.708838 | orchestrator | Sunday 23 March 2025 13:32:36 +0000 (0:00:00.717) 0:11:05.090 ********** 2025-03-23 13:36:28.708843 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.708848 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.708852 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.708857 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.708862 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.708867 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.708872 | orchestrator | 2025-03-23 13:36:28.708877 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-03-23 13:36:28.708882 | orchestrator | Sunday 23 March 2025 13:32:38 +0000 (0:00:01.116) 0:11:06.207 ********** 2025-03-23 13:36:28.708886 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.708891 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.708896 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.708901 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.708905 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.708910 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.708915 | orchestrator | 2025-03-23 13:36:28.708920 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-03-23 13:36:28.708930 | orchestrator | Sunday 23 March 2025 13:32:38 +0000 (0:00:00.756) 0:11:06.964 ********** 2025-03-23 13:36:28.708935 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.708940 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.708945 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.708949 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.708954 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.708959 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.708964 | orchestrator | 2025-03-23 13:36:28.708969 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-03-23 13:36:28.708973 | orchestrator | Sunday 23 March 2025 13:32:40 +0000 (0:00:01.261) 0:11:08.225 ********** 2025-03-23 13:36:28.708978 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.708983 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.708988 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.708993 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.708997 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.709002 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.709007 | orchestrator | 2025-03-23 13:36:28.709012 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-03-23 13:36:28.709017 | orchestrator | Sunday 23 March 2025 13:32:41 +0000 (0:00:00.957) 0:11:09.182 ********** 2025-03-23 13:36:28.709021 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.709026 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.709031 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.709036 | orchestrator | skipping: [testbed-node-3] 2025-03-23 
13:36:28.709041 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.709045 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.709050 | orchestrator | 2025-03-23 13:36:28.709055 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-03-23 13:36:28.709060 | orchestrator | Sunday 23 March 2025 13:32:42 +0000 (0:00:01.148) 0:11:10.331 ********** 2025-03-23 13:36:28.709064 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.709069 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.709074 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.709079 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.709084 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.709089 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.709093 | orchestrator | 2025-03-23 13:36:28.709098 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-03-23 13:36:28.709103 | orchestrator | Sunday 23 March 2025 13:32:42 +0000 (0:00:00.810) 0:11:11.142 ********** 2025-03-23 13:36:28.709108 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.709113 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.709120 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.709125 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.709130 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.709135 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.709139 | orchestrator | 2025-03-23 13:36:28.709144 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-03-23 13:36:28.709149 | orchestrator | Sunday 23 March 2025 13:32:44 +0000 (0:00:01.058) 0:11:12.200 ********** 2025-03-23 13:36:28.709154 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.709158 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.709163 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.709168 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.709173 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.709177 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.709182 | orchestrator | 2025-03-23 13:36:28.709187 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-03-23 13:36:28.709192 | orchestrator | Sunday 23 March 2025 13:32:44 +0000 (0:00:00.765) 0:11:12.966 ********** 2025-03-23 13:36:28.709197 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.709204 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.709209 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.709214 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.709219 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.709224 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.709228 | orchestrator | 2025-03-23 13:36:28.709233 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-03-23 13:36:28.709238 | orchestrator | Sunday 23 March 2025 13:32:45 +0000 (0:00:01.103) 0:11:14.069 ********** 2025-03-23 13:36:28.709243 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.709248 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.709252 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.709257 | orchestrator | skipping: 
[testbed-node-3] 2025-03-23 13:36:28.709264 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.709269 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.709274 | orchestrator | 2025-03-23 13:36:28.709290 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-03-23 13:36:28.709296 | orchestrator | Sunday 23 March 2025 13:32:46 +0000 (0:00:00.862) 0:11:14.932 ********** 2025-03-23 13:36:28.709301 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.709306 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.709311 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.709316 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.709321 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.709326 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.709330 | orchestrator | 2025-03-23 13:36:28.709335 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-03-23 13:36:28.709340 | orchestrator | Sunday 23 March 2025 13:32:47 +0000 (0:00:01.033) 0:11:15.966 ********** 2025-03-23 13:36:28.709345 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.709350 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.709354 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.709359 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.709364 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.709369 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.709373 | orchestrator | 2025-03-23 13:36:28.709378 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-03-23 13:36:28.709383 | orchestrator | Sunday 23 March 2025 13:32:48 +0000 (0:00:00.748) 0:11:16.714 ********** 2025-03-23 13:36:28.709388 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.709392 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.709397 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.709402 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.709407 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.709412 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.709417 | orchestrator | 2025-03-23 13:36:28.709421 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-03-23 13:36:28.709426 | orchestrator | Sunday 23 March 2025 13:32:49 +0000 (0:00:01.003) 0:11:17.718 ********** 2025-03-23 13:36:28.709431 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.709436 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.709440 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.709445 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.709450 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.709455 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.709460 | orchestrator | 2025-03-23 13:36:28.709464 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-03-23 13:36:28.709469 | orchestrator | Sunday 23 March 2025 13:32:50 +0000 (0:00:00.727) 0:11:18.446 ********** 2025-03-23 13:36:28.709474 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-03-23 13:36:28.709479 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-03-23 
13:36:28.709487 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.709492 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-03-23 13:36:28.709496 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-03-23 13:36:28.709501 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.709506 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-03-23 13:36:28.709511 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-03-23 13:36:28.709516 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.709521 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-23 13:36:28.709525 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-23 13:36:28.709530 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.709535 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-23 13:36:28.709540 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-23 13:36:28.709545 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.709553 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-23 13:36:28.709558 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-23 13:36:28.709563 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.709568 | orchestrator | 2025-03-23 13:36:28.709573 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-03-23 13:36:28.709577 | orchestrator | Sunday 23 March 2025 13:32:51 +0000 (0:00:01.078) 0:11:19.524 ********** 2025-03-23 13:36:28.709582 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-03-23 13:36:28.709589 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-03-23 13:36:28.709594 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.709599 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-03-23 13:36:28.709603 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-03-23 13:36:28.709608 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.709634 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-03-23 13:36:28.709640 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-03-23 13:36:28.709645 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.709650 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-03-23 13:36:28.709654 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-03-23 13:36:28.709659 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-03-23 13:36:28.709664 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-03-23 13:36:28.709669 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.709673 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.709678 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-03-23 13:36:28.709683 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-03-23 13:36:28.709688 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.709693 | orchestrator | 2025-03-23 13:36:28.709698 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-03-23 13:36:28.709702 | orchestrator | Sunday 23 March 2025 13:32:52 +0000 (0:00:00.850) 0:11:20.374 ********** 2025-03-23 13:36:28.709707 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.709712 | 
orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.709717 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.709722 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.709739 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.709745 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.709750 | orchestrator | 2025-03-23 13:36:28.709755 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-03-23 13:36:28.709760 | orchestrator | Sunday 23 March 2025 13:32:53 +0000 (0:00:01.040) 0:11:21.414 ********** 2025-03-23 13:36:28.709764 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.709769 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.709774 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.709782 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.709787 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.709792 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.709796 | orchestrator | 2025-03-23 13:36:28.709801 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-03-23 13:36:28.709806 | orchestrator | Sunday 23 March 2025 13:32:53 +0000 (0:00:00.731) 0:11:22.146 ********** 2025-03-23 13:36:28.709811 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.709816 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.709820 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.709825 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.709830 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.709835 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.709839 | orchestrator | 2025-03-23 13:36:28.709844 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-03-23 13:36:28.709849 | orchestrator | Sunday 23 March 2025 13:32:55 +0000 (0:00:01.025) 0:11:23.172 ********** 2025-03-23 13:36:28.709854 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.709859 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.709863 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.709868 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.709873 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.709878 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.709882 | orchestrator | 2025-03-23 13:36:28.709887 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-03-23 13:36:28.709892 | orchestrator | Sunday 23 March 2025 13:32:55 +0000 (0:00:00.696) 0:11:23.868 ********** 2025-03-23 13:36:28.709897 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.709902 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.709906 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.709911 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.709916 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.709921 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.709926 | orchestrator | 2025-03-23 13:36:28.709933 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-03-23 13:36:28.709938 | orchestrator | Sunday 23 March 2025 13:32:56 +0000 (0:00:00.970) 0:11:24.838 ********** 2025-03-23 13:36:28.709942 | 
orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.709947 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.709952 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.709956 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.709961 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.709966 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.709971 | orchestrator | 2025-03-23 13:36:28.709976 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-03-23 13:36:28.709980 | orchestrator | Sunday 23 March 2025 13:32:57 +0000 (0:00:00.720) 0:11:25.559 ********** 2025-03-23 13:36:28.709985 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-23 13:36:28.709990 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-23 13:36:28.709995 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-23 13:36:28.709999 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.710004 | orchestrator | 2025-03-23 13:36:28.710009 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-03-23 13:36:28.710027 | orchestrator | Sunday 23 March 2025 13:32:57 +0000 (0:00:00.468) 0:11:26.028 ********** 2025-03-23 13:36:28.710033 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-23 13:36:28.710038 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-23 13:36:28.710042 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-23 13:36:28.710047 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.710052 | orchestrator | 2025-03-23 13:36:28.710060 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-03-23 13:36:28.710065 | orchestrator | Sunday 23 March 2025 13:32:58 +0000 (0:00:00.746) 0:11:26.774 ********** 2025-03-23 13:36:28.710070 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-23 13:36:28.710074 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-23 13:36:28.710079 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-23 13:36:28.710084 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.710089 | orchestrator | 2025-03-23 13:36:28.710094 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-23 13:36:28.710099 | orchestrator | Sunday 23 March 2025 13:32:59 +0000 (0:00:01.097) 0:11:27.872 ********** 2025-03-23 13:36:28.710103 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.710108 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.710113 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.710118 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.710122 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.710130 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.710134 | orchestrator | 2025-03-23 13:36:28.710139 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-03-23 13:36:28.710144 | orchestrator | Sunday 23 March 2025 13:33:00 +0000 (0:00:00.690) 0:11:28.563 ********** 2025-03-23 13:36:28.710149 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-03-23 13:36:28.710154 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.710159 | orchestrator | skipping: 
[testbed-node-1] => (item=0)  2025-03-23 13:36:28.710163 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.710168 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-03-23 13:36:28.710173 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.710190 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-23 13:36:28.710196 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.710201 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-23 13:36:28.710206 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.710210 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-23 13:36:28.710215 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.710220 | orchestrator | 2025-03-23 13:36:28.710225 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-03-23 13:36:28.710230 | orchestrator | Sunday 23 March 2025 13:33:01 +0000 (0:00:01.481) 0:11:30.044 ********** 2025-03-23 13:36:28.710234 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.710239 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.710244 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.710249 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.710254 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.710258 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.710263 | orchestrator | 2025-03-23 13:36:28.710268 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-23 13:36:28.710273 | orchestrator | Sunday 23 March 2025 13:33:02 +0000 (0:00:00.732) 0:11:30.776 ********** 2025-03-23 13:36:28.710277 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.710282 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.710287 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.710292 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.710296 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.710301 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.710306 | orchestrator | 2025-03-23 13:36:28.710311 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-03-23 13:36:28.710316 | orchestrator | Sunday 23 March 2025 13:33:03 +0000 (0:00:00.994) 0:11:31.771 ********** 2025-03-23 13:36:28.710320 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-03-23 13:36:28.710325 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.710330 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-03-23 13:36:28.710339 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.710344 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-03-23 13:36:28.710349 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.710354 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-23 13:36:28.710359 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.710363 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-23 13:36:28.710368 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.710373 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-23 13:36:28.710378 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.710383 | orchestrator | 2025-03-23 13:36:28.710387 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-03-23 
13:36:28.710392 | orchestrator | Sunday 23 March 2025 13:33:04 +0000 (0:00:01.156) 0:11:32.928 ********** 2025-03-23 13:36:28.710397 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.710402 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.710407 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.710411 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-03-23 13:36:28.710416 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.710421 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-03-23 13:36:28.710426 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.710431 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-03-23 13:36:28.710436 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.710441 | orchestrator | 2025-03-23 13:36:28.710445 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-03-23 13:36:28.710450 | orchestrator | Sunday 23 March 2025 13:33:05 +0000 (0:00:01.100) 0:11:34.028 ********** 2025-03-23 13:36:28.710455 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-23 13:36:28.710460 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-23 13:36:28.710465 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-23 13:36:28.710469 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.710474 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-03-23 13:36:28.710479 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-03-23 13:36:28.710484 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-03-23 13:36:28.710488 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.710493 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-03-23 13:36:28.710498 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-03-23 13:36:28.710503 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-03-23 13:36:28.710508 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.710512 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.710517 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.710522 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.710527 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-03-23 13:36:28.710531 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-03-23 13:36:28.710536 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-03-23 13:36:28.710541 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.710546 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.710550 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-03-23 13:36:28.710557 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-03-23 13:36:28.710562 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-03-23 13:36:28.710570 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.710577 | 
orchestrator | 2025-03-23 13:36:28.710582 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-03-23 13:36:28.710587 | orchestrator | Sunday 23 March 2025 13:33:07 +0000 (0:00:01.623) 0:11:35.651 ********** 2025-03-23 13:36:28.710592 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.710597 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.710601 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.710606 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.710611 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.710626 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.710631 | orchestrator | 2025-03-23 13:36:28.710636 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-03-23 13:36:28.710640 | orchestrator | Sunday 23 March 2025 13:33:09 +0000 (0:00:01.562) 0:11:37.214 ********** 2025-03-23 13:36:28.710645 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.710650 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.710655 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.710660 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-23 13:36:28.710664 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.710669 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-03-23 13:36:28.710674 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.710679 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-03-23 13:36:28.710683 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.710688 | orchestrator | 2025-03-23 13:36:28.710693 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-03-23 13:36:28.710698 | orchestrator | Sunday 23 March 2025 13:33:10 +0000 (0:00:01.806) 0:11:39.020 ********** 2025-03-23 13:36:28.710702 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.710707 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.710712 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.710720 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.710725 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.710729 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.710734 | orchestrator | 2025-03-23 13:36:28.710739 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-03-23 13:36:28.710744 | orchestrator | Sunday 23 March 2025 13:33:12 +0000 (0:00:01.516) 0:11:40.536 ********** 2025-03-23 13:36:28.710748 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:36:28.710753 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:36:28.710758 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:36:28.710763 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.710767 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.710772 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.710777 | orchestrator | 2025-03-23 13:36:28.710782 | orchestrator | TASK [ceph-crash : create client.crash keyring] ******************************** 2025-03-23 13:36:28.710786 | orchestrator | Sunday 23 March 2025 13:33:13 +0000 (0:00:01.529) 0:11:42.066 ********** 2025-03-23 13:36:28.710791 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.710796 | orchestrator | 2025-03-23 13:36:28.710812 | orchestrator | TASK [ceph-crash : get keys 
from monitors] ************************************* 2025-03-23 13:36:28.710817 | orchestrator | Sunday 23 March 2025 13:33:17 +0000 (0:00:03.512) 0:11:45.579 ********** 2025-03-23 13:36:28.710822 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.710827 | orchestrator | 2025-03-23 13:36:28.710832 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] ********************************* 2025-03-23 13:36:28.710837 | orchestrator | Sunday 23 March 2025 13:33:19 +0000 (0:00:01.865) 0:11:47.444 ********** 2025-03-23 13:36:28.710841 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.710846 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.710851 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.710856 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.710860 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.710868 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.710873 | orchestrator | 2025-03-23 13:36:28.710878 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] ************************** 2025-03-23 13:36:28.710882 | orchestrator | Sunday 23 March 2025 13:33:21 +0000 (0:00:02.377) 0:11:49.822 ********** 2025-03-23 13:36:28.710887 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.710892 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.710897 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.710901 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.710906 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.710911 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.710915 | orchestrator | 2025-03-23 13:36:28.710920 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] ********************************** 2025-03-23 13:36:28.710925 | orchestrator | Sunday 23 March 2025 13:33:22 +0000 (0:00:01.193) 0:11:51.015 ********** 2025-03-23 13:36:28.710930 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.710935 | orchestrator | 2025-03-23 13:36:28.710940 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ******** 2025-03-23 13:36:28.710945 | orchestrator | Sunday 23 March 2025 13:33:24 +0000 (0:00:01.637) 0:11:52.653 ********** 2025-03-23 13:36:28.710950 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.710954 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.710959 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.710964 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.710969 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.710973 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.710978 | orchestrator | 2025-03-23 13:36:28.710983 | orchestrator | TASK [ceph-crash : start the ceph-crash service] ******************************* 2025-03-23 13:36:28.710988 | orchestrator | Sunday 23 March 2025 13:33:26 +0000 (0:00:02.459) 0:11:55.112 ********** 2025-03-23 13:36:28.710992 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.710997 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.711002 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.711006 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.711011 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.711016 | orchestrator | changed: [testbed-node-5] 2025-03-23 
13:36:28.711021 | orchestrator | 2025-03-23 13:36:28.711026 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] **************************** 2025-03-23 13:36:28.711033 | orchestrator | Sunday 23 March 2025 13:33:31 +0000 (0:00:04.862) 0:11:59.975 ********** 2025-03-23 13:36:28.711039 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.711044 | orchestrator | 2025-03-23 13:36:28.711048 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ****** 2025-03-23 13:36:28.711053 | orchestrator | Sunday 23 March 2025 13:33:33 +0000 (0:00:01.536) 0:12:01.511 ********** 2025-03-23 13:36:28.711058 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.711063 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.711068 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.711072 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.711077 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.711082 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.711087 | orchestrator | 2025-03-23 13:36:28.711092 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] **************** 2025-03-23 13:36:28.711096 | orchestrator | Sunday 23 March 2025 13:33:34 +0000 (0:00:00.839) 0:12:02.350 ********** 2025-03-23 13:36:28.711101 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:36:28.711106 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:36:28.711111 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:36:28.711115 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.711120 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.711128 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.711132 | orchestrator | 2025-03-23 13:36:28.711137 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] ******* 2025-03-23 13:36:28.711142 | orchestrator | Sunday 23 March 2025 13:33:37 +0000 (0:00:03.054) 0:12:05.405 ********** 2025-03-23 13:36:28.711147 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:36:28.711152 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:36:28.711156 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:36:28.711164 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.711168 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.711173 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.711178 | orchestrator | 2025-03-23 13:36:28.711183 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-03-23 13:36:28.711187 | orchestrator | 2025-03-23 13:36:28.711192 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-03-23 13:36:28.711197 | orchestrator | Sunday 23 March 2025 13:33:41 +0000 (0:00:04.011) 0:12:09.416 ********** 2025-03-23 13:36:28.711202 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.711209 | orchestrator | 2025-03-23 13:36:28.711214 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-03-23 13:36:28.711219 | orchestrator | Sunday 23 March 2025 13:33:42 +0000 (0:00:01.112) 0:12:10.528 ********** 2025-03-23 13:36:28.711224 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.711228 | 
orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.711233 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.711238 | orchestrator | 2025-03-23 13:36:28.711243 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-03-23 13:36:28.711247 | orchestrator | Sunday 23 March 2025 13:33:42 +0000 (0:00:00.358) 0:12:10.887 ********** 2025-03-23 13:36:28.711252 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.711257 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.711262 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.711266 | orchestrator | 2025-03-23 13:36:28.711271 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-03-23 13:36:28.711276 | orchestrator | Sunday 23 March 2025 13:33:43 +0000 (0:00:00.796) 0:12:11.683 ********** 2025-03-23 13:36:28.711281 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.711286 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.711290 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.711295 | orchestrator | 2025-03-23 13:36:28.711300 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-03-23 13:36:28.711305 | orchestrator | Sunday 23 March 2025 13:33:44 +0000 (0:00:01.203) 0:12:12.887 ********** 2025-03-23 13:36:28.711309 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.711314 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.711319 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.711324 | orchestrator | 2025-03-23 13:36:28.711332 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-03-23 13:36:28.711337 | orchestrator | Sunday 23 March 2025 13:33:45 +0000 (0:00:00.860) 0:12:13.748 ********** 2025-03-23 13:36:28.711342 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.711346 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.711351 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.711356 | orchestrator | 2025-03-23 13:36:28.711361 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-03-23 13:36:28.711365 | orchestrator | Sunday 23 March 2025 13:33:46 +0000 (0:00:00.427) 0:12:14.176 ********** 2025-03-23 13:36:28.711370 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.711375 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.711380 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.711384 | orchestrator | 2025-03-23 13:36:28.711389 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-03-23 13:36:28.711394 | orchestrator | Sunday 23 March 2025 13:33:46 +0000 (0:00:00.494) 0:12:14.671 ********** 2025-03-23 13:36:28.711402 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.711407 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.711412 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.711417 | orchestrator | 2025-03-23 13:36:28.711421 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-03-23 13:36:28.711426 | orchestrator | Sunday 23 March 2025 13:33:47 +0000 (0:00:00.920) 0:12:15.591 ********** 2025-03-23 13:36:28.711431 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.711436 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.711440 | orchestrator | skipping: [testbed-node-5] 
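(Illustrative sketch, not part of the build log.) The ceph-crash tasks recorded above (create the client.crash keyring, create /var/lib/ceph/crash/posted, generate a systemd unit for the containerized ceph-crash daemon, start the service, then restart it via the crash handler) correspond roughly to the play below. It assumes a hypothetical "ceph_nodes" group, a hypothetical ceph-crash.service.j2 template, and uid/gid 167 for the ceph user inside the container images; the real ceph-ansible role is considerably more involved.

- hosts: ceph_nodes                # hypothetical group name
  become: true
  tasks:
    - name: Create /var/lib/ceph/crash/posted
      ansible.builtin.file:
        path: /var/lib/ceph/crash/posted
        state: directory
        owner: "167"               # assumed ceph uid/gid in the container images
        group: "167"
        mode: "0750"

    - name: Generate systemd unit file for ceph-crash container
      ansible.builtin.template:
        src: ceph-crash.service.j2               # hypothetical template name
        dest: /etc/systemd/system/ceph-crash@.service
        owner: root
        group: root
        mode: "0644"
      notify: Restart the ceph-crash service

    - name: Start the ceph-crash service
      ansible.builtin.systemd:
        name: "ceph-crash@{{ ansible_facts['hostname'] }}"
        state: started
        enabled: true
        daemon_reload: true

  handlers:
    - name: Restart the ceph-crash service
      ansible.builtin.systemd:
        name: "ceph-crash@{{ ansible_facts['hostname'] }}"
        state: restarted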
2025-03-23 13:36:28.711445 | orchestrator | 2025-03-23 13:36:28.711450 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-03-23 13:36:28.711455 | orchestrator | Sunday 23 March 2025 13:33:47 +0000 (0:00:00.424) 0:12:16.016 ********** 2025-03-23 13:36:28.711460 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.711466 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.711471 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.711476 | orchestrator | 2025-03-23 13:36:28.711481 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-03-23 13:36:28.711485 | orchestrator | Sunday 23 March 2025 13:33:48 +0000 (0:00:00.455) 0:12:16.471 ********** 2025-03-23 13:36:28.711490 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.711495 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.711500 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.711505 | orchestrator | 2025-03-23 13:36:28.711509 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-03-23 13:36:28.711514 | orchestrator | Sunday 23 March 2025 13:33:48 +0000 (0:00:00.462) 0:12:16.934 ********** 2025-03-23 13:36:28.711519 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.711524 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.711529 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.711533 | orchestrator | 2025-03-23 13:36:28.711538 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-03-23 13:36:28.711543 | orchestrator | Sunday 23 March 2025 13:33:49 +0000 (0:00:01.117) 0:12:18.051 ********** 2025-03-23 13:36:28.711548 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.711552 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.711557 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.711562 | orchestrator | 2025-03-23 13:36:28.711567 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-03-23 13:36:28.711571 | orchestrator | Sunday 23 March 2025 13:33:50 +0000 (0:00:00.374) 0:12:18.425 ********** 2025-03-23 13:36:28.711576 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.711581 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.711586 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.711590 | orchestrator | 2025-03-23 13:36:28.711595 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-03-23 13:36:28.711600 | orchestrator | Sunday 23 March 2025 13:33:50 +0000 (0:00:00.371) 0:12:18.797 ********** 2025-03-23 13:36:28.711605 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.711610 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.711638 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.711643 | orchestrator | 2025-03-23 13:36:28.711648 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-03-23 13:36:28.711653 | orchestrator | Sunday 23 March 2025 13:33:51 +0000 (0:00:00.478) 0:12:19.276 ********** 2025-03-23 13:36:28.711658 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.711663 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.711668 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.711672 | orchestrator | 2025-03-23 13:36:28.711677 | orchestrator | TASK [ceph-handler : 
set_fact handler_rgw_status] ****************************** 2025-03-23 13:36:28.711682 | orchestrator | Sunday 23 March 2025 13:33:51 +0000 (0:00:00.728) 0:12:20.004 ********** 2025-03-23 13:36:28.711690 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.711695 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.711700 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.711707 | orchestrator | 2025-03-23 13:36:28.711712 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-03-23 13:36:28.711717 | orchestrator | Sunday 23 March 2025 13:33:52 +0000 (0:00:00.470) 0:12:20.475 ********** 2025-03-23 13:36:28.711722 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.711726 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.711731 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.711736 | orchestrator | 2025-03-23 13:36:28.711741 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-03-23 13:36:28.711746 | orchestrator | Sunday 23 March 2025 13:33:52 +0000 (0:00:00.449) 0:12:20.925 ********** 2025-03-23 13:36:28.711750 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.711755 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.711760 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.711765 | orchestrator | 2025-03-23 13:36:28.711770 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-03-23 13:36:28.711774 | orchestrator | Sunday 23 March 2025 13:33:53 +0000 (0:00:00.435) 0:12:21.361 ********** 2025-03-23 13:36:28.711779 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.711784 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.711789 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.711794 | orchestrator | 2025-03-23 13:36:28.711798 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-03-23 13:36:28.711803 | orchestrator | Sunday 23 March 2025 13:33:53 +0000 (0:00:00.530) 0:12:21.891 ********** 2025-03-23 13:36:28.711808 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.711813 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.711818 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.711822 | orchestrator | 2025-03-23 13:36:28.711829 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-03-23 13:36:28.711834 | orchestrator | Sunday 23 March 2025 13:33:54 +0000 (0:00:00.380) 0:12:22.272 ********** 2025-03-23 13:36:28.711839 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.711844 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.711849 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.711853 | orchestrator | 2025-03-23 13:36:28.711858 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-03-23 13:36:28.711863 | orchestrator | Sunday 23 March 2025 13:33:54 +0000 (0:00:00.457) 0:12:22.730 ********** 2025-03-23 13:36:28.711868 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.711872 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.711877 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.711882 | orchestrator | 2025-03-23 13:36:28.711887 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-03-23 13:36:28.711892 | 
orchestrator | Sunday 23 March 2025 13:33:54 +0000 (0:00:00.413) 0:12:23.143 ********** 2025-03-23 13:36:28.711896 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.711901 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.711906 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.711910 | orchestrator | 2025-03-23 13:36:28.711915 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-03-23 13:36:28.711920 | orchestrator | Sunday 23 March 2025 13:33:55 +0000 (0:00:00.533) 0:12:23.677 ********** 2025-03-23 13:36:28.711925 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.711929 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.711936 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.711941 | orchestrator | 2025-03-23 13:36:28.711946 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-03-23 13:36:28.711951 | orchestrator | Sunday 23 March 2025 13:33:55 +0000 (0:00:00.317) 0:12:23.994 ********** 2025-03-23 13:36:28.711956 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.711960 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.711968 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.711973 | orchestrator | 2025-03-23 13:36:28.711978 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-03-23 13:36:28.711983 | orchestrator | Sunday 23 March 2025 13:33:56 +0000 (0:00:00.368) 0:12:24.363 ********** 2025-03-23 13:36:28.711987 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.711992 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.711997 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712002 | orchestrator | 2025-03-23 13:36:28.712006 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-03-23 13:36:28.712011 | orchestrator | Sunday 23 March 2025 13:33:56 +0000 (0:00:00.395) 0:12:24.759 ********** 2025-03-23 13:36:28.712016 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712021 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712026 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712030 | orchestrator | 2025-03-23 13:36:28.712035 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-03-23 13:36:28.712040 | orchestrator | Sunday 23 March 2025 13:33:57 +0000 (0:00:00.614) 0:12:25.374 ********** 2025-03-23 13:36:28.712045 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712050 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712055 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712059 | orchestrator | 2025-03-23 13:36:28.712064 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-03-23 13:36:28.712069 | orchestrator | Sunday 23 March 2025 13:33:57 +0000 (0:00:00.415) 0:12:25.789 ********** 2025-03-23 13:36:28.712074 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712079 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712084 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712088 | orchestrator | 2025-03-23 13:36:28.712093 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-03-23 
13:36:28.712098 | orchestrator | Sunday 23 March 2025 13:33:57 +0000 (0:00:00.309) 0:12:26.098 ********** 2025-03-23 13:36:28.712103 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712108 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712112 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712117 | orchestrator | 2025-03-23 13:36:28.712122 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-03-23 13:36:28.712127 | orchestrator | Sunday 23 March 2025 13:33:58 +0000 (0:00:00.305) 0:12:26.404 ********** 2025-03-23 13:36:28.712132 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712136 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712141 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712146 | orchestrator | 2025-03-23 13:36:28.712151 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-03-23 13:36:28.712156 | orchestrator | Sunday 23 March 2025 13:33:58 +0000 (0:00:00.552) 0:12:26.956 ********** 2025-03-23 13:36:28.712160 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712165 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712170 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712175 | orchestrator | 2025-03-23 13:36:28.712180 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-03-23 13:36:28.712184 | orchestrator | Sunday 23 March 2025 13:33:59 +0000 (0:00:00.357) 0:12:27.314 ********** 2025-03-23 13:36:28.712189 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-23 13:36:28.712194 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-23 13:36:28.712199 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712203 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-23 13:36:28.712208 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-23 13:36:28.712213 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712218 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-23 13:36:28.712228 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-23 13:36:28.712233 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712240 | orchestrator | 2025-03-23 13:36:28.712245 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-03-23 13:36:28.712250 | orchestrator | Sunday 23 March 2025 13:33:59 +0000 (0:00:00.432) 0:12:27.746 ********** 2025-03-23 13:36:28.712255 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-03-23 13:36:28.712260 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-03-23 13:36:28.712264 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712269 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-03-23 13:36:28.712274 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-03-23 13:36:28.712279 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712284 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-03-23 13:36:28.712288 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-03-23 13:36:28.712293 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712298 | orchestrator | 2025-03-23 13:36:28.712303 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target] ******************************* 2025-03-23 13:36:28.712307 | orchestrator | Sunday 23 March 2025 13:33:59 +0000 (0:00:00.366) 0:12:28.112 ********** 2025-03-23 13:36:28.712312 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712317 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712322 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712326 | orchestrator | 2025-03-23 13:36:28.712331 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-03-23 13:36:28.712336 | orchestrator | Sunday 23 March 2025 13:34:00 +0000 (0:00:00.549) 0:12:28.662 ********** 2025-03-23 13:36:28.712341 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712347 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712352 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712357 | orchestrator | 2025-03-23 13:36:28.712362 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-03-23 13:36:28.712367 | orchestrator | Sunday 23 March 2025 13:34:00 +0000 (0:00:00.391) 0:12:29.054 ********** 2025-03-23 13:36:28.712371 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712376 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712381 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712385 | orchestrator | 2025-03-23 13:36:28.712390 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-03-23 13:36:28.712395 | orchestrator | Sunday 23 March 2025 13:34:01 +0000 (0:00:00.418) 0:12:29.472 ********** 2025-03-23 13:36:28.712400 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712405 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712409 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712414 | orchestrator | 2025-03-23 13:36:28.712419 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-03-23 13:36:28.712426 | orchestrator | Sunday 23 March 2025 13:34:01 +0000 (0:00:00.437) 0:12:29.909 ********** 2025-03-23 13:36:28.712431 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712436 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712440 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712445 | orchestrator | 2025-03-23 13:36:28.712450 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-03-23 13:36:28.712455 | orchestrator | Sunday 23 March 2025 13:34:02 +0000 (0:00:00.687) 0:12:30.597 ********** 2025-03-23 13:36:28.712460 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712464 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712469 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712474 | orchestrator | 2025-03-23 13:36:28.712479 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-03-23 13:36:28.712487 | orchestrator | Sunday 23 March 2025 13:34:02 +0000 (0:00:00.382) 0:12:30.979 ********** 2025-03-23 13:36:28.712492 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.712497 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.712502 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.712506 | orchestrator | 
skipping: [testbed-node-3] 2025-03-23 13:36:28.712511 | orchestrator | 2025-03-23 13:36:28.712516 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-03-23 13:36:28.712521 | orchestrator | Sunday 23 March 2025 13:34:03 +0000 (0:00:00.464) 0:12:31.444 ********** 2025-03-23 13:36:28.712525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.712530 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.712535 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.712540 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712545 | orchestrator | 2025-03-23 13:36:28.712549 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-03-23 13:36:28.712554 | orchestrator | Sunday 23 March 2025 13:34:03 +0000 (0:00:00.569) 0:12:32.013 ********** 2025-03-23 13:36:28.712559 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.712564 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.712568 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.712573 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712578 | orchestrator | 2025-03-23 13:36:28.712583 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-23 13:36:28.712587 | orchestrator | Sunday 23 March 2025 13:34:04 +0000 (0:00:00.531) 0:12:32.544 ********** 2025-03-23 13:36:28.712592 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712597 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712602 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712606 | orchestrator | 2025-03-23 13:36:28.712611 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-03-23 13:36:28.712624 | orchestrator | Sunday 23 March 2025 13:34:04 +0000 (0:00:00.437) 0:12:32.982 ********** 2025-03-23 13:36:28.712629 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-23 13:36:28.712633 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712638 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-23 13:36:28.712643 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712648 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-23 13:36:28.712652 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712657 | orchestrator | 2025-03-23 13:36:28.712662 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-03-23 13:36:28.712667 | orchestrator | Sunday 23 March 2025 13:34:05 +0000 (0:00:00.937) 0:12:33.919 ********** 2025-03-23 13:36:28.712672 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712676 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712681 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712686 | orchestrator | 2025-03-23 13:36:28.712691 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-23 13:36:28.712695 | orchestrator | Sunday 23 March 2025 13:34:06 +0000 (0:00:00.399) 0:12:34.319 ********** 2025-03-23 13:36:28.712700 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712705 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712710 | orchestrator | skipping: 
[testbed-node-5] 2025-03-23 13:36:28.712714 | orchestrator | 2025-03-23 13:36:28.712719 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-03-23 13:36:28.712724 | orchestrator | Sunday 23 March 2025 13:34:06 +0000 (0:00:00.418) 0:12:34.737 ********** 2025-03-23 13:36:28.712729 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-23 13:36:28.712734 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712742 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-23 13:36:28.712746 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712751 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-23 13:36:28.712756 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712761 | orchestrator | 2025-03-23 13:36:28.712767 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-03-23 13:36:28.712772 | orchestrator | Sunday 23 March 2025 13:34:07 +0000 (0:00:00.669) 0:12:35.407 ********** 2025-03-23 13:36:28.712777 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-03-23 13:36:28.712782 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712787 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-03-23 13:36:28.712792 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712797 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-03-23 13:36:28.712801 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712806 | orchestrator | 2025-03-23 13:36:28.712811 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-03-23 13:36:28.712816 | orchestrator | Sunday 23 March 2025 13:34:07 +0000 (0:00:00.745) 0:12:36.153 ********** 2025-03-23 13:36:28.712820 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.712825 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.712830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.712835 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712840 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-03-23 13:36:28.712844 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-03-23 13:36:28.712849 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-03-23 13:36:28.712854 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712859 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-03-23 13:36:28.712863 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-03-23 13:36:28.712868 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-03-23 13:36:28.712873 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712878 | orchestrator | 2025-03-23 13:36:28.712883 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-03-23 13:36:28.712888 | orchestrator | Sunday 23 March 2025 13:34:08 +0000 (0:00:00.739) 0:12:36.892 ********** 2025-03-23 13:36:28.712892 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712897 | 
orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712902 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712907 | orchestrator | 2025-03-23 13:36:28.712911 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-03-23 13:36:28.712916 | orchestrator | Sunday 23 March 2025 13:34:09 +0000 (0:00:01.058) 0:12:37.951 ********** 2025-03-23 13:36:28.712921 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-23 13:36:28.712926 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-03-23 13:36:28.712930 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712935 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712940 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-03-23 13:36:28.712945 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712949 | orchestrator | 2025-03-23 13:36:28.712954 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-03-23 13:36:28.712959 | orchestrator | Sunday 23 March 2025 13:34:10 +0000 (0:00:00.706) 0:12:38.657 ********** 2025-03-23 13:36:28.712964 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.712969 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.712977 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.712982 | orchestrator | 2025-03-23 13:36:28.712987 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-03-23 13:36:28.712991 | orchestrator | Sunday 23 March 2025 13:34:11 +0000 (0:00:01.086) 0:12:39.744 ********** 2025-03-23 13:36:28.712996 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.713001 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.713006 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.713010 | orchestrator | 2025-03-23 13:36:28.713015 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-03-23 13:36:28.713022 | orchestrator | Sunday 23 March 2025 13:34:12 +0000 (0:00:00.903) 0:12:40.648 ********** 2025-03-23 13:36:28.713027 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.713032 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.713036 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-03-23 13:36:28.713041 | orchestrator | 2025-03-23 13:36:28.713046 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-03-23 13:36:28.713051 | orchestrator | Sunday 23 March 2025 13:34:13 +0000 (0:00:00.876) 0:12:41.524 ********** 2025-03-23 13:36:28.713056 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-03-23 13:36:28.713060 | orchestrator | 2025-03-23 13:36:28.713065 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-03-23 13:36:28.713070 | orchestrator | Sunday 23 March 2025 13:34:15 +0000 (0:00:01.914) 0:12:43.439 ********** 2025-03-23 13:36:28.713075 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-03-23 13:36:28.713081 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.713086 | orchestrator | 2025-03-23 13:36:28.713091 | orchestrator | TASK [ceph-mds : 
create filesystem pools] ************************************** 2025-03-23 13:36:28.713096 | orchestrator | Sunday 23 March 2025 13:34:15 +0000 (0:00:00.442) 0:12:43.881 ********** 2025-03-23 13:36:28.713102 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-03-23 13:36:28.713109 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-03-23 13:36:28.713114 | orchestrator | 2025-03-23 13:36:28.713119 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-03-23 13:36:28.713123 | orchestrator | Sunday 23 March 2025 13:34:22 +0000 (0:00:06.745) 0:12:50.627 ********** 2025-03-23 13:36:28.713128 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-03-23 13:36:28.713133 | orchestrator | 2025-03-23 13:36:28.713138 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-03-23 13:36:28.713142 | orchestrator | Sunday 23 March 2025 13:34:25 +0000 (0:00:03.214) 0:12:53.842 ********** 2025-03-23 13:36:28.713147 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.713152 | orchestrator | 2025-03-23 13:36:28.713157 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-03-23 13:36:28.713162 | orchestrator | Sunday 23 March 2025 13:34:26 +0000 (0:00:01.123) 0:12:54.965 ********** 2025-03-23 13:36:28.713166 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-03-23 13:36:28.713171 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-03-23 13:36:28.713176 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-03-23 13:36:28.713183 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-03-23 13:36:28.713188 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-03-23 13:36:28.713193 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-03-23 13:36:28.713198 | orchestrator | 2025-03-23 13:36:28.713203 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-03-23 13:36:28.713207 | orchestrator | Sunday 23 March 2025 13:34:28 +0000 (0:00:01.355) 0:12:56.321 ********** 2025-03-23 13:36:28.713212 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-23 13:36:28.713217 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-23 13:36:28.713222 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-03-23 13:36:28.713227 | orchestrator | 2025-03-23 13:36:28.713231 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-03-23 13:36:28.713236 | orchestrator | Sunday 23 March 2025 13:34:30 +0000 (0:00:01.914) 0:12:58.236 ********** 2025-03-23 13:36:28.713241 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-03-23 
13:36:28.713246 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-23 13:36:28.713250 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.713255 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-03-23 13:36:28.713260 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-03-23 13:36:28.713265 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.713270 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-03-23 13:36:28.713274 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-03-23 13:36:28.713279 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.713284 | orchestrator | 2025-03-23 13:36:28.713289 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-03-23 13:36:28.713294 | orchestrator | Sunday 23 March 2025 13:34:31 +0000 (0:00:01.564) 0:12:59.801 ********** 2025-03-23 13:36:28.713298 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.713303 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.713308 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.713313 | orchestrator | 2025-03-23 13:36:28.713318 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-03-23 13:36:28.713322 | orchestrator | Sunday 23 March 2025 13:34:32 +0000 (0:00:00.467) 0:13:00.268 ********** 2025-03-23 13:36:28.713327 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.713332 | orchestrator | 2025-03-23 13:36:28.713337 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-03-23 13:36:28.713342 | orchestrator | Sunday 23 March 2025 13:34:32 +0000 (0:00:00.712) 0:13:00.981 ********** 2025-03-23 13:36:28.713346 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.713351 | orchestrator | 2025-03-23 13:36:28.713356 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-03-23 13:36:28.713361 | orchestrator | Sunday 23 March 2025 13:34:33 +0000 (0:00:00.907) 0:13:01.889 ********** 2025-03-23 13:36:28.713365 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.713370 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.713375 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.713380 | orchestrator | 2025-03-23 13:36:28.713385 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-03-23 13:36:28.713389 | orchestrator | Sunday 23 March 2025 13:34:35 +0000 (0:00:01.583) 0:13:03.472 ********** 2025-03-23 13:36:28.713394 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.713399 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.713404 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.713408 | orchestrator | 2025-03-23 13:36:28.713417 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] *************************************** 2025-03-23 13:36:28.713424 | orchestrator | Sunday 23 March 2025 13:34:36 +0000 (0:00:01.268) 0:13:04.740 ********** 2025-03-23 13:36:28.713431 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.713436 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.713441 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.713445 | orchestrator | 2025-03-23 
13:36:28.713450 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-03-23 13:36:28.713458 | orchestrator | Sunday 23 March 2025 13:34:38 +0000 (0:00:02.103) 0:13:06.843 ********** 2025-03-23 13:36:28.713463 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.713468 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.713472 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.713477 | orchestrator | 2025-03-23 13:36:28.713482 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-03-23 13:36:28.713487 | orchestrator | Sunday 23 March 2025 13:34:40 +0000 (0:00:02.166) 0:13:09.010 ********** 2025-03-23 13:36:28.713492 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-03-23 13:36:28.713496 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-03-23 13:36:28.713501 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 2025-03-23 13:36:28.713506 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.713511 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.713516 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.713520 | orchestrator | 2025-03-23 13:36:28.713525 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-03-23 13:36:28.713530 | orchestrator | Sunday 23 March 2025 13:34:58 +0000 (0:00:17.318) 0:13:26.329 ********** 2025-03-23 13:36:28.713535 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.713540 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.713544 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.713549 | orchestrator | 2025-03-23 13:36:28.713554 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-03-23 13:36:28.713559 | orchestrator | Sunday 23 March 2025 13:34:59 +0000 (0:00:00.834) 0:13:27.164 ********** 2025-03-23 13:36:28.713564 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.713569 | orchestrator | 2025-03-23 13:36:28.713574 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-03-23 13:36:28.713578 | orchestrator | Sunday 23 March 2025 13:34:59 +0000 (0:00:00.936) 0:13:28.100 ********** 2025-03-23 13:36:28.713583 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.713588 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.713593 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.713598 | orchestrator | 2025-03-23 13:36:28.713603 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-03-23 13:36:28.713607 | orchestrator | Sunday 23 March 2025 13:35:00 +0000 (0:00:00.327) 0:13:28.428 ********** 2025-03-23 13:36:28.713612 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.713625 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.713630 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.713635 | orchestrator | 2025-03-23 13:36:28.713640 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-03-23 13:36:28.713644 | orchestrator | Sunday 23 March 2025 13:35:01 +0000 (0:00:01.207) 0:13:29.635 ********** 2025-03-23 13:36:28.713649 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.713654 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.713659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.713664 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.713669 | orchestrator | 2025-03-23 13:36:28.713673 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-03-23 13:36:28.713678 | orchestrator | Sunday 23 March 2025 13:35:02 +0000 (0:00:01.051) 0:13:30.687 ********** 2025-03-23 13:36:28.713686 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.713691 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.713695 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.713700 | orchestrator | 2025-03-23 13:36:28.713705 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-03-23 13:36:28.713710 | orchestrator | Sunday 23 March 2025 13:35:02 +0000 (0:00:00.314) 0:13:31.001 ********** 2025-03-23 13:36:28.713715 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.713720 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.713725 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.713729 | orchestrator | 2025-03-23 13:36:28.713734 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-03-23 13:36:28.713739 | orchestrator | 2025-03-23 13:36:28.713744 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-03-23 13:36:28.713749 | orchestrator | Sunday 23 March 2025 13:35:04 +0000 (0:00:01.971) 0:13:32.973 ********** 2025-03-23 13:36:28.713753 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.713761 | orchestrator | 2025-03-23 13:36:28.713765 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-03-23 13:36:28.713770 | orchestrator | Sunday 23 March 2025 13:35:05 +0000 (0:00:00.688) 0:13:33.661 ********** 2025-03-23 13:36:28.713775 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.713780 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.713785 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.713790 | orchestrator | 2025-03-23 13:36:28.713795 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-03-23 13:36:28.713799 | orchestrator | Sunday 23 March 2025 13:35:05 +0000 (0:00:00.325) 0:13:33.987 ********** 2025-03-23 13:36:28.713804 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.713809 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.713816 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.713821 | orchestrator | 2025-03-23 13:36:28.713826 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-03-23 13:36:28.713831 | orchestrator | Sunday 23 March 2025 13:35:06 +0000 (0:00:00.703) 0:13:34.691 ********** 2025-03-23 13:36:28.713835 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.713840 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.713849 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.713854 | orchestrator | 2025-03-23 13:36:28.713859 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 
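
Earlier in the ceph-mds play above, the "create filesystem pools" and "create ceph filesystem" tasks created the cephfs_data and cephfs_metadata pools (pg_num/pgp_num 16, replicated_rule, size 3) and the filesystem itself, delegated to testbed-node-0 (192.168.16.10). The following is a minimal Ansible sketch of equivalent steps, not the actual ceph-ansible implementation; the filesystem name "cephfs" is an assumption, and the raw ceph commands are not idempotent the way the role's tasks are.

    ---
    # Sketch only: reproduce the CephFS pool and filesystem creation reported
    # above with plain ceph CLI calls on the first monitor.
    # Pool names, pg_num 16, replicated_rule and size 3 come from the log;
    # the filesystem name "cephfs" is an assumption.
    - hosts: testbed-node-0
      gather_facts: false
      become: true
      tasks:
        - name: create filesystem pools
          ansible.builtin.command: ceph osd pool create {{ item }} 16 16 replicated replicated_rule
          loop:
            - cephfs_data
            - cephfs_metadata
          changed_when: true

        - name: set pool size (size 3 as shown in the log items)
          ansible.builtin.command: ceph osd pool set {{ item }} size 3
          loop:
            - cephfs_data
            - cephfs_metadata
          changed_when: true

        - name: create ceph filesystem
          ansible.builtin.command: ceph fs new cephfs cephfs_metadata cephfs_data
          changed_when: true
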
2025-03-23 13:36:28.713864 | orchestrator | Sunday 23 March 2025 13:35:07 +0000 (0:00:00.844) 0:13:35.535 ********** 2025-03-23 13:36:28.713869 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.713874 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.713878 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.713883 | orchestrator | 2025-03-23 13:36:28.713888 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-03-23 13:36:28.713893 | orchestrator | Sunday 23 March 2025 13:35:08 +0000 (0:00:00.730) 0:13:36.265 ********** 2025-03-23 13:36:28.713898 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.713902 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.713907 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.713912 | orchestrator | 2025-03-23 13:36:28.713917 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-03-23 13:36:28.713922 | orchestrator | Sunday 23 March 2025 13:35:08 +0000 (0:00:00.360) 0:13:36.626 ********** 2025-03-23 13:36:28.713926 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.713931 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.713936 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.713941 | orchestrator | 2025-03-23 13:36:28.713946 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-03-23 13:36:28.713951 | orchestrator | Sunday 23 March 2025 13:35:08 +0000 (0:00:00.359) 0:13:36.986 ********** 2025-03-23 13:36:28.713958 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.713963 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.713968 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.713973 | orchestrator | 2025-03-23 13:36:28.713977 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-03-23 13:36:28.713982 | orchestrator | Sunday 23 March 2025 13:35:09 +0000 (0:00:00.694) 0:13:37.680 ********** 2025-03-23 13:36:28.713987 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.713992 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.713997 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714001 | orchestrator | 2025-03-23 13:36:28.714006 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-03-23 13:36:28.714011 | orchestrator | Sunday 23 March 2025 13:35:09 +0000 (0:00:00.345) 0:13:38.025 ********** 2025-03-23 13:36:28.714037 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714042 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714047 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714052 | orchestrator | 2025-03-23 13:36:28.714057 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-03-23 13:36:28.714062 | orchestrator | Sunday 23 March 2025 13:35:10 +0000 (0:00:00.372) 0:13:38.398 ********** 2025-03-23 13:36:28.714067 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714072 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714076 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714081 | orchestrator | 2025-03-23 13:36:28.714086 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-03-23 13:36:28.714091 | orchestrator | Sunday 23 March 2025 13:35:10 +0000 
(0:00:00.372) 0:13:38.771 ********** 2025-03-23 13:36:28.714096 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.714100 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.714105 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.714110 | orchestrator | 2025-03-23 13:36:28.714115 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-03-23 13:36:28.714119 | orchestrator | Sunday 23 March 2025 13:35:11 +0000 (0:00:01.144) 0:13:39.915 ********** 2025-03-23 13:36:28.714124 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714129 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714134 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714138 | orchestrator | 2025-03-23 13:36:28.714143 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-03-23 13:36:28.714148 | orchestrator | Sunday 23 March 2025 13:35:12 +0000 (0:00:00.390) 0:13:40.306 ********** 2025-03-23 13:36:28.714153 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714158 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714163 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714167 | orchestrator | 2025-03-23 13:36:28.714172 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-03-23 13:36:28.714177 | orchestrator | Sunday 23 March 2025 13:35:12 +0000 (0:00:00.393) 0:13:40.700 ********** 2025-03-23 13:36:28.714182 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.714187 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.714191 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.714196 | orchestrator | 2025-03-23 13:36:28.714201 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-03-23 13:36:28.714206 | orchestrator | Sunday 23 March 2025 13:35:12 +0000 (0:00:00.411) 0:13:41.111 ********** 2025-03-23 13:36:28.714210 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.714215 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.714220 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.714225 | orchestrator | 2025-03-23 13:36:28.714229 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-03-23 13:36:28.714234 | orchestrator | Sunday 23 March 2025 13:35:13 +0000 (0:00:00.685) 0:13:41.797 ********** 2025-03-23 13:36:28.714242 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.714247 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.714251 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.714256 | orchestrator | 2025-03-23 13:36:28.714261 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-03-23 13:36:28.714266 | orchestrator | Sunday 23 March 2025 13:35:14 +0000 (0:00:00.426) 0:13:42.223 ********** 2025-03-23 13:36:28.714271 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714275 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714280 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714285 | orchestrator | 2025-03-23 13:36:28.714290 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-03-23 13:36:28.714295 | orchestrator | Sunday 23 March 2025 13:35:14 +0000 (0:00:00.448) 0:13:42.672 ********** 2025-03-23 13:36:28.714299 | orchestrator | skipping: [testbed-node-3] 2025-03-23 
13:36:28.714304 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714309 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714314 | orchestrator | 2025-03-23 13:36:28.714321 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-03-23 13:36:28.714326 | orchestrator | Sunday 23 March 2025 13:35:14 +0000 (0:00:00.382) 0:13:43.055 ********** 2025-03-23 13:36:28.714331 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714336 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714343 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714348 | orchestrator | 2025-03-23 13:36:28.714355 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-03-23 13:36:28.714360 | orchestrator | Sunday 23 March 2025 13:35:15 +0000 (0:00:00.747) 0:13:43.803 ********** 2025-03-23 13:36:28.714364 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.714369 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.714374 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.714379 | orchestrator | 2025-03-23 13:36:28.714383 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-03-23 13:36:28.714388 | orchestrator | Sunday 23 March 2025 13:35:16 +0000 (0:00:00.413) 0:13:44.216 ********** 2025-03-23 13:36:28.714393 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714398 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714403 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714407 | orchestrator | 2025-03-23 13:36:28.714412 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-03-23 13:36:28.714417 | orchestrator | Sunday 23 March 2025 13:35:16 +0000 (0:00:00.378) 0:13:44.595 ********** 2025-03-23 13:36:28.714422 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714427 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714432 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714436 | orchestrator | 2025-03-23 13:36:28.714441 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-03-23 13:36:28.714446 | orchestrator | Sunday 23 March 2025 13:35:16 +0000 (0:00:00.362) 0:13:44.957 ********** 2025-03-23 13:36:28.714451 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714455 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714460 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714465 | orchestrator | 2025-03-23 13:36:28.714470 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-03-23 13:36:28.714477 | orchestrator | Sunday 23 March 2025 13:35:17 +0000 (0:00:00.718) 0:13:45.676 ********** 2025-03-23 13:36:28.714482 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714487 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714492 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714497 | orchestrator | 2025-03-23 13:36:28.714502 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-03-23 13:36:28.714506 | orchestrator | Sunday 23 March 2025 13:35:17 +0000 (0:00:00.399) 0:13:46.076 ********** 2025-03-23 13:36:28.714511 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714516 | orchestrator | skipping: [testbed-node-4] 2025-03-23 
13:36:28.714523 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714528 | orchestrator | 2025-03-23 13:36:28.714533 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-03-23 13:36:28.714538 | orchestrator | Sunday 23 March 2025 13:35:18 +0000 (0:00:00.399) 0:13:46.475 ********** 2025-03-23 13:36:28.714542 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714547 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714552 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714557 | orchestrator | 2025-03-23 13:36:28.714562 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-03-23 13:36:28.714566 | orchestrator | Sunday 23 March 2025 13:35:18 +0000 (0:00:00.317) 0:13:46.792 ********** 2025-03-23 13:36:28.714571 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714576 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714581 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714586 | orchestrator | 2025-03-23 13:36:28.714591 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-03-23 13:36:28.714596 | orchestrator | Sunday 23 March 2025 13:35:19 +0000 (0:00:00.724) 0:13:47.517 ********** 2025-03-23 13:36:28.714600 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714605 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714610 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714636 | orchestrator | 2025-03-23 13:36:28.714641 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-03-23 13:36:28.714646 | orchestrator | Sunday 23 March 2025 13:35:19 +0000 (0:00:00.440) 0:13:47.958 ********** 2025-03-23 13:36:28.714651 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714656 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714661 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714666 | orchestrator | 2025-03-23 13:36:28.714671 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-03-23 13:36:28.714676 | orchestrator | Sunday 23 March 2025 13:35:20 +0000 (0:00:00.391) 0:13:48.350 ********** 2025-03-23 13:36:28.714681 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714686 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714690 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714695 | orchestrator | 2025-03-23 13:36:28.714700 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-03-23 13:36:28.714705 | orchestrator | Sunday 23 March 2025 13:35:20 +0000 (0:00:00.421) 0:13:48.771 ********** 2025-03-23 13:36:28.714710 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714715 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714720 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714725 | orchestrator | 2025-03-23 13:36:28.714729 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-03-23 13:36:28.714734 | orchestrator | Sunday 23 March 2025 13:35:21 +0000 (0:00:00.660) 0:13:49.432 ********** 2025-03-23 13:36:28.714739 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714744 | orchestrator | 
skipping: [testbed-node-4] 2025-03-23 13:36:28.714749 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714754 | orchestrator | 2025-03-23 13:36:28.714759 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-03-23 13:36:28.714766 | orchestrator | Sunday 23 March 2025 13:35:21 +0000 (0:00:00.392) 0:13:49.825 ********** 2025-03-23 13:36:28.714771 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-23 13:36:28.714776 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-23 13:36:28.714781 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714786 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-23 13:36:28.714790 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-23 13:36:28.714795 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714800 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-23 13:36:28.714808 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-23 13:36:28.714813 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714818 | orchestrator | 2025-03-23 13:36:28.714823 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-03-23 13:36:28.714828 | orchestrator | Sunday 23 March 2025 13:35:22 +0000 (0:00:00.395) 0:13:50.220 ********** 2025-03-23 13:36:28.714833 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-03-23 13:36:28.714840 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-03-23 13:36:28.714844 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714849 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-03-23 13:36:28.714854 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-03-23 13:36:28.714859 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714864 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-03-23 13:36:28.714869 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-03-23 13:36:28.714874 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714878 | orchestrator | 2025-03-23 13:36:28.714883 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-03-23 13:36:28.714888 | orchestrator | Sunday 23 March 2025 13:35:22 +0000 (0:00:00.395) 0:13:50.616 ********** 2025-03-23 13:36:28.714893 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714898 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714902 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714907 | orchestrator | 2025-03-23 13:36:28.714912 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-03-23 13:36:28.714917 | orchestrator | Sunday 23 March 2025 13:35:23 +0000 (0:00:00.673) 0:13:51.290 ********** 2025-03-23 13:36:28.714922 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714926 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714935 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714940 | orchestrator | 2025-03-23 13:36:28.714945 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-03-23 13:36:28.714949 | orchestrator | Sunday 23 March 2025 13:35:23 +0000 (0:00:00.356) 0:13:51.647 ********** 2025-03-23 
13:36:28.714954 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714959 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714964 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714969 | orchestrator | 2025-03-23 13:36:28.714974 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-03-23 13:36:28.714979 | orchestrator | Sunday 23 March 2025 13:35:23 +0000 (0:00:00.384) 0:13:52.032 ********** 2025-03-23 13:36:28.714983 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.714988 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.714993 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.714998 | orchestrator | 2025-03-23 13:36:28.715003 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-03-23 13:36:28.715008 | orchestrator | Sunday 23 March 2025 13:35:24 +0000 (0:00:00.405) 0:13:52.437 ********** 2025-03-23 13:36:28.715013 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.715017 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.715022 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.715027 | orchestrator | 2025-03-23 13:36:28.715032 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-03-23 13:36:28.715037 | orchestrator | Sunday 23 March 2025 13:35:25 +0000 (0:00:00.779) 0:13:53.217 ********** 2025-03-23 13:36:28.715042 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.715046 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.715051 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.715056 | orchestrator | 2025-03-23 13:36:28.715061 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-03-23 13:36:28.715069 | orchestrator | Sunday 23 March 2025 13:35:25 +0000 (0:00:00.378) 0:13:53.595 ********** 2025-03-23 13:36:28.715074 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.715078 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.715083 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.715088 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.715093 | orchestrator | 2025-03-23 13:36:28.715097 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-03-23 13:36:28.715102 | orchestrator | Sunday 23 March 2025 13:35:25 +0000 (0:00:00.508) 0:13:54.103 ********** 2025-03-23 13:36:28.715107 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.715112 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.715116 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.715121 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.715126 | orchestrator | 2025-03-23 13:36:28.715131 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-03-23 13:36:28.715136 | orchestrator | Sunday 23 March 2025 13:35:26 +0000 (0:00:00.496) 0:13:54.600 ********** 2025-03-23 13:36:28.715140 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.715145 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.715150 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-5)  2025-03-23 13:36:28.715155 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.715159 | orchestrator | 2025-03-23 13:36:28.715164 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-23 13:36:28.715171 | orchestrator | Sunday 23 March 2025 13:35:26 +0000 (0:00:00.491) 0:13:55.092 ********** 2025-03-23 13:36:28.715176 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.715181 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.715185 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.715190 | orchestrator | 2025-03-23 13:36:28.715195 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-03-23 13:36:28.715200 | orchestrator | Sunday 23 March 2025 13:35:27 +0000 (0:00:00.375) 0:13:55.467 ********** 2025-03-23 13:36:28.715205 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-23 13:36:28.715209 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.715214 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-23 13:36:28.715219 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.715224 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-23 13:36:28.715228 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.715233 | orchestrator | 2025-03-23 13:36:28.715238 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-03-23 13:36:28.715243 | orchestrator | Sunday 23 March 2025 13:35:28 +0000 (0:00:00.901) 0:13:56.368 ********** 2025-03-23 13:36:28.715247 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.715252 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.715257 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.715262 | orchestrator | 2025-03-23 13:36:28.715267 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-23 13:36:28.715271 | orchestrator | Sunday 23 March 2025 13:35:28 +0000 (0:00:00.359) 0:13:56.728 ********** 2025-03-23 13:36:28.715276 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.715281 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.715286 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.715290 | orchestrator | 2025-03-23 13:36:28.715295 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-03-23 13:36:28.715300 | orchestrator | Sunday 23 March 2025 13:35:28 +0000 (0:00:00.366) 0:13:57.094 ********** 2025-03-23 13:36:28.715305 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-23 13:36:28.715309 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.715317 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-23 13:36:28.715322 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.715327 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-23 13:36:28.715332 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.715337 | orchestrator | 2025-03-23 13:36:28.715341 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-03-23 13:36:28.715346 | orchestrator | Sunday 23 March 2025 13:35:29 +0000 (0:00:00.531) 0:13:57.626 ********** 2025-03-23 13:36:28.715351 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  
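
The ceph-facts tasks above build a per-host rgw_instances list; every variant shown here is skipped for these hosts, but the items echoed in the skip messages show the shape it resolves to on this deployment: a single "rgw0" instance per node, bound to the node address on port 8081. Below is a minimal set_fact sketch of how such a list can be assembled; the host list, radosgw_num_instances and the address source are assumptions, while the port and the item shape match the log.

    ---
    # Sketch only: assemble an rgw_instances fact shaped like the items shown
    # above (instance_name rgw0, radosgw_address, radosgw_frontend_port 8081).
    # The host list and the address source are assumptions.
    - hosts: testbed-node-3,testbed-node-4,testbed-node-5
      gather_facts: false
      vars:
        radosgw_num_instances: 1        # assumption: one instance per host
        radosgw_frontend_port: 8081     # matches the port in the log
        _radosgw_address: "{{ ansible_host }}"   # assumption; the real play derives this from address/interface facts
      tasks:
        - name: set_fact rgw_instances without rgw multisite (sketch)
          ansible.builtin.set_fact:
            rgw_instances: >-
              {{ rgw_instances | default([]) + [{
                   'instance_name': 'rgw' ~ item,
                   'radosgw_address': _radosgw_address,
                   'radosgw_frontend_port': radosgw_frontend_port + item | int
                 }] }}
          loop: "{{ range(0, radosgw_num_instances) | list }}"
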
2025-03-23 13:36:28.715358 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.715363 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-03-23 13:36:28.715368 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.715373 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-03-23 13:36:28.715378 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.715382 | orchestrator | 2025-03-23 13:36:28.715387 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-03-23 13:36:28.715392 | orchestrator | Sunday 23 March 2025 13:35:30 +0000 (0:00:00.732) 0:13:58.358 ********** 2025-03-23 13:36:28.715397 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.715402 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.715406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.715411 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.715416 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-03-23 13:36:28.715421 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-03-23 13:36:28.715425 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-03-23 13:36:28.715430 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.715435 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-03-23 13:36:28.715440 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-03-23 13:36:28.715445 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-03-23 13:36:28.715449 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.715454 | orchestrator | 2025-03-23 13:36:28.715459 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-03-23 13:36:28.715464 | orchestrator | Sunday 23 March 2025 13:35:30 +0000 (0:00:00.769) 0:13:59.128 ********** 2025-03-23 13:36:28.715469 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.715473 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.715478 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.715483 | orchestrator | 2025-03-23 13:36:28.715488 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-03-23 13:36:28.715493 | orchestrator | Sunday 23 March 2025 13:35:31 +0000 (0:00:00.911) 0:14:00.040 ********** 2025-03-23 13:36:28.715497 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-23 13:36:28.715502 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.715507 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-03-23 13:36:28.715512 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.715516 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-03-23 13:36:28.715521 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.715526 | orchestrator | 2025-03-23 13:36:28.715531 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-03-23 13:36:28.715536 | orchestrator | Sunday 23 March 2025 13:35:32 +0000 (0:00:00.658) 0:14:00.698 ********** 2025-03-23 13:36:28.715540 | orchestrator | skipping: [testbed-node-3] 
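
The "generate ceph.conf configuration file" and "create rgw keyrings" tasks above are skipped on these hosts; the gateway keys are instead fetched from the monitors and distributed by the "get keys from monitors" and "copy ceph key(s) if needed" steps that follow. When an rgw keyring does have to be created, it reduces to a ceph auth get-or-create call on a monitor, roughly as sketched below; the entity name, capabilities and keyring path are assumptions, not values taken from this log.

    ---
    # Sketch only: create a per-host RGW keyring on the first monitor.
    # Entity name, capabilities and output path are assumptions.
    - hosts: testbed-node-3,testbed-node-4,testbed-node-5
      gather_facts: false
      become: true
      tasks:
        - name: create rgw keyring (sketch)
          ansible.builtin.command: >-
            ceph auth get-or-create client.rgw.{{ inventory_hostname }}.rgw0
            osd 'allow rwx' mon 'allow rw'
            -o /etc/ceph/ceph.client.rgw.{{ inventory_hostname }}.rgw0.keyring
          delegate_to: testbed-node-0
          changed_when: true
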
2025-03-23 13:36:28.715549 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.715557 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.715562 | orchestrator | 2025-03-23 13:36:28.715566 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-03-23 13:36:28.715571 | orchestrator | Sunday 23 March 2025 13:35:33 +0000 (0:00:00.980) 0:14:01.679 ********** 2025-03-23 13:36:28.715576 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.715581 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.715586 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.715590 | orchestrator | 2025-03-23 13:36:28.715595 | orchestrator | TASK [ceph-rgw : include common.yml] ******************************************* 2025-03-23 13:36:28.715600 | orchestrator | Sunday 23 March 2025 13:35:34 +0000 (0:00:00.733) 0:14:02.412 ********** 2025-03-23 13:36:28.715605 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.715610 | orchestrator | 2025-03-23 13:36:28.715625 | orchestrator | TASK [ceph-rgw : create rados gateway directories] ***************************** 2025-03-23 13:36:28.715630 | orchestrator | Sunday 23 March 2025 13:35:35 +0000 (0:00:00.957) 0:14:03.370 ********** 2025-03-23 13:36:28.715635 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2025-03-23 13:36:28.715640 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2025-03-23 13:36:28.715644 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2025-03-23 13:36:28.715649 | orchestrator | 2025-03-23 13:36:28.715654 | orchestrator | TASK [ceph-rgw : get keys from monitors] *************************************** 2025-03-23 13:36:28.715659 | orchestrator | Sunday 23 March 2025 13:35:35 +0000 (0:00:00.784) 0:14:04.155 ********** 2025-03-23 13:36:28.715664 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-23 13:36:28.715669 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-23 13:36:28.715673 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-03-23 13:36:28.715678 | orchestrator | 2025-03-23 13:36:28.715683 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] *********************************** 2025-03-23 13:36:28.715688 | orchestrator | Sunday 23 March 2025 13:35:38 +0000 (0:00:02.027) 0:14:06.182 ********** 2025-03-23 13:36:28.715693 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-03-23 13:36:28.715698 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-03-23 13:36:28.715703 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.715708 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-03-23 13:36:28.715712 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-23 13:36:28.715717 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.715722 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-03-23 13:36:28.715727 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-03-23 13:36:28.715732 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.715737 | orchestrator | 2025-03-23 13:36:28.715742 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] ********** 2025-03-23 13:36:28.715747 | orchestrator | Sunday 23 March 2025 13:35:39 +0000 (0:00:01.675) 0:14:07.857 ********** 2025-03-23 13:36:28.715751 | 
orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.715757 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.715761 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.715766 | orchestrator | 2025-03-23 13:36:28.715771 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ****************************** 2025-03-23 13:36:28.715776 | orchestrator | Sunday 23 March 2025 13:35:40 +0000 (0:00:00.406) 0:14:08.264 ********** 2025-03-23 13:36:28.715781 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.715786 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.715790 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.715795 | orchestrator | 2025-03-23 13:36:28.715800 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] ************************************** 2025-03-23 13:36:28.715805 | orchestrator | Sunday 23 March 2025 13:35:40 +0000 (0:00:00.391) 0:14:08.656 ********** 2025-03-23 13:36:28.715810 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-03-23 13:36:28.715819 | orchestrator | 2025-03-23 13:36:28.715824 | orchestrator | TASK [ceph-rgw : create ec profile] ******************************************** 2025-03-23 13:36:28.715829 | orchestrator | Sunday 23 March 2025 13:35:40 +0000 (0:00:00.266) 0:14:08.923 ********** 2025-03-23 13:36:28.715834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-23 13:36:28.715841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-23 13:36:28.715846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-23 13:36:28.715851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-23 13:36:28.715855 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-23 13:36:28.715860 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.715865 | orchestrator | 2025-03-23 13:36:28.715870 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-03-23 13:36:28.715875 | orchestrator | Sunday 23 March 2025 13:35:42 +0000 (0:00:01.326) 0:14:10.249 ********** 2025-03-23 13:36:28.715880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-23 13:36:28.715885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-23 13:36:28.715891 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-23 13:36:28.715899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-23 13:36:28.715904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-23 13:36:28.715909 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.715913 | 
orchestrator | 2025-03-23 13:36:28.715918 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-03-23 13:36:28.715923 | orchestrator | Sunday 23 March 2025 13:35:42 +0000 (0:00:00.832) 0:14:11.082 ********** 2025-03-23 13:36:28.715928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-23 13:36:28.715933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-23 13:36:28.715938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-23 13:36:28.715942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-23 13:36:28.715947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-23 13:36:28.715952 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.715957 | orchestrator | 2025-03-23 13:36:28.715962 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-03-23 13:36:28.715966 | orchestrator | Sunday 23 March 2025 13:35:43 +0000 (0:00:00.632) 0:14:11.715 ********** 2025-03-23 13:36:28.715971 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-03-23 13:36:28.715976 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-03-23 13:36:28.715985 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-03-23 13:36:28.715990 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-03-23 13:36:28.715995 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-03-23 13:36:28.716000 | orchestrator | 2025-03-23 13:36:28.716004 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-03-23 13:36:28.716009 | orchestrator | Sunday 23 March 2025 13:36:09 +0000 (0:00:26.209) 0:14:37.924 ********** 2025-03-23 13:36:28.716014 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.716019 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.716024 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.716028 | orchestrator | 2025-03-23 13:36:28.716033 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ****************************** 2025-03-23 13:36:28.716038 | orchestrator | Sunday 23 March 2025 13:36:10 +0000 (0:00:00.526) 0:14:38.450 ********** 2025-03-23 13:36:28.716043 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.716047 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.716052 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.716057 | orchestrator | 2025-03-23 13:36:28.716062 | 
orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] ********************************* 2025-03-23 13:36:28.716067 | orchestrator | Sunday 23 March 2025 13:36:10 +0000 (0:00:00.370) 0:14:38.821 ********** 2025-03-23 13:36:28.716071 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.716076 | orchestrator | 2025-03-23 13:36:28.716083 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-03-23 13:36:28.716088 | orchestrator | Sunday 23 March 2025 13:36:11 +0000 (0:00:00.606) 0:14:39.427 ********** 2025-03-23 13:36:28.716093 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.716098 | orchestrator | 2025-03-23 13:36:28.716102 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-03-23 13:36:28.716107 | orchestrator | Sunday 23 March 2025 13:36:12 +0000 (0:00:00.912) 0:14:40.340 ********** 2025-03-23 13:36:28.716112 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.716117 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.716121 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.716126 | orchestrator | 2025-03-23 13:36:28.716131 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-03-23 13:36:28.716136 | orchestrator | Sunday 23 March 2025 13:36:13 +0000 (0:00:01.380) 0:14:41.720 ********** 2025-03-23 13:36:28.716141 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.716145 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.716150 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.716155 | orchestrator | 2025-03-23 13:36:28.716159 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] *********************************** 2025-03-23 13:36:28.716166 | orchestrator | Sunday 23 March 2025 13:36:14 +0000 (0:00:01.241) 0:14:42.961 ********** 2025-03-23 13:36:28.716171 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.716175 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.716180 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.716185 | orchestrator | 2025-03-23 13:36:28.716190 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-03-23 13:36:28.716195 | orchestrator | Sunday 23 March 2025 13:36:17 +0000 (0:00:02.380) 0:14:45.342 ********** 2025-03-23 13:36:28.716199 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-03-23 13:36:28.716207 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-03-23 13:36:28.716212 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-03-23 13:36:28.716217 | orchestrator | 2025-03-23 13:36:28.716222 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] ***************************** 2025-03-23 13:36:28.716227 | orchestrator | Sunday 23 March 2025 13:36:19 +0000 (0:00:01.985) 0:14:47.327 ********** 2025-03-23 13:36:28.716231 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.716236 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:36:28.716241 | 
orchestrator | skipping: [testbed-node-5] 2025-03-23 13:36:28.716246 | orchestrator | 2025-03-23 13:36:28.716251 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-03-23 13:36:28.716255 | orchestrator | Sunday 23 March 2025 13:36:20 +0000 (0:00:01.363) 0:14:48.691 ********** 2025-03-23 13:36:28.716260 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.716265 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.716270 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.716275 | orchestrator | 2025-03-23 13:36:28.716280 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-03-23 13:36:28.716284 | orchestrator | Sunday 23 March 2025 13:36:21 +0000 (0:00:00.729) 0:14:49.420 ********** 2025-03-23 13:36:28.716289 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:36:28.716294 | orchestrator | 2025-03-23 13:36:28.716299 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-03-23 13:36:28.716304 | orchestrator | Sunday 23 March 2025 13:36:22 +0000 (0:00:00.853) 0:14:50.274 ********** 2025-03-23 13:36:28.716308 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.716313 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.716318 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.716323 | orchestrator | 2025-03-23 13:36:28.716328 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-03-23 13:36:28.716332 | orchestrator | Sunday 23 March 2025 13:36:22 +0000 (0:00:00.362) 0:14:50.636 ********** 2025-03-23 13:36:28.716337 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.716342 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.716347 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:36:28.716352 | orchestrator | 2025-03-23 13:36:28.716356 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-03-23 13:36:28.716361 | orchestrator | Sunday 23 March 2025 13:36:23 +0000 (0:00:01.324) 0:14:51.961 ********** 2025-03-23 13:36:28.716366 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:36:28.716371 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:36:28.716376 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:36:28.716380 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:36:28.716385 | orchestrator | 2025-03-23 13:36:28.716390 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-03-23 13:36:28.716395 | orchestrator | Sunday 23 March 2025 13:36:24 +0000 (0:00:01.100) 0:14:53.062 ********** 2025-03-23 13:36:28.716400 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:36:28.716404 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:36:28.716409 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:36:28.716414 | orchestrator | 2025-03-23 13:36:28.716419 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-03-23 13:36:28.716424 | orchestrator | Sunday 23 March 2025 13:36:25 +0000 (0:00:00.353) 0:14:53.416 ********** 2025-03-23 13:36:28.716428 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:36:28.716433 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:36:28.716438 | orchestrator 
| changed: [testbed-node-5] 2025-03-23 13:36:28.716443 | orchestrator | 2025-03-23 13:36:28.716448 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:36:28.716455 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0 2025-03-23 13:36:28.716461 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0 2025-03-23 13:36:28.716466 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0 2025-03-23 13:36:28.716471 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0 2025-03-23 13:36:28.716475 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0 2025-03-23 13:36:28.716480 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0 2025-03-23 13:36:28.716485 | orchestrator | 2025-03-23 13:36:28.716490 | orchestrator | 2025-03-23 13:36:28.716495 | orchestrator | 2025-03-23 13:36:28.716501 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:36:31.720147 | orchestrator | Sunday 23 March 2025 13:36:26 +0000 (0:00:01.454) 0:14:54.870 ********** 2025-03-23 13:36:31.720252 | orchestrator | =============================================================================== 2025-03-23 13:36:31.720269 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 43.03s 2025-03-23 13:36:31.720283 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 41.60s 2025-03-23 13:36:31.720327 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 26.21s 2025-03-23 13:36:31.720342 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... 
------------ 21.57s 2025-03-23 13:36:31.720356 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.32s 2025-03-23 13:36:31.720370 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.81s 2025-03-23 13:36:31.720384 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.70s 2025-03-23 13:36:31.720398 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 8.51s 2025-03-23 13:36:31.720412 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 7.46s 2025-03-23 13:36:31.720426 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 7.18s 2025-03-23 13:36:31.720440 | orchestrator | ceph-config : create ceph initial directories --------------------------- 6.88s 2025-03-23 13:36:31.720453 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 6.75s 2025-03-23 13:36:31.720467 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.57s 2025-03-23 13:36:31.720481 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 5.32s 2025-03-23 13:36:31.720495 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 5.08s 2025-03-23 13:36:31.720508 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 4.86s 2025-03-23 13:36:31.720522 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 4.75s 2025-03-23 13:36:31.720536 | orchestrator | ceph-handler : set _crash_handler_called after restart ------------------ 4.01s 2025-03-23 13:36:31.720550 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 3.80s 2025-03-23 13:36:31.720563 | orchestrator | ceph-osd : apply operating system tuning -------------------------------- 3.59s 2025-03-23 13:36:31.720578 | orchestrator | 2025-03-23 13:36:28 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:36:31.720607 | orchestrator | 2025-03-23 13:36:31 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:36:31.721434 | orchestrator | 2025-03-23 13:36:31 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:36:31.722737 | orchestrator | 2025-03-23 13:36:31 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:36:34.770444 | orchestrator | 2025-03-23 13:36:31 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:36:34.770560 | orchestrator | 2025-03-23 13:36:34 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:36:34.771349 | orchestrator | 2025-03-23 13:36:34 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:36:34.772715 | orchestrator | 2025-03-23 13:36:34 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:36:37.814555 | orchestrator | 2025-03-23 13:36:34 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:36:37.814734 | orchestrator | 2025-03-23 13:36:37 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:36:37.816681 | orchestrator | 2025-03-23 13:36:37 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:36:37.818580 | orchestrator | 2025-03-23 13:36:37 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in 
state STARTED 2025-03-23 13:36:40.881696 | orchestrator | 2025-03-23 13:36:37 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:36:40.881817 | orchestrator | 2025-03-23 13:36:40 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:36:43.936738 | orchestrator | 2025-03-23 13:36:40 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:36:43.936837 | orchestrator | 2025-03-23 13:36:40 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:36:43.936854 | orchestrator | 2025-03-23 13:36:40 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:36:43.936885 | orchestrator | 2025-03-23 13:36:43 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:36:43.938164 | orchestrator | 2025-03-23 13:36:43 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:36:43.939172 | orchestrator | 2025-03-23 13:36:43 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:36:46.988142 | orchestrator | 2025-03-23 13:36:43 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:36:46.988262 | orchestrator | 2025-03-23 13:36:46 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:36:46.990651 | orchestrator | 2025-03-23 13:36:46 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:36:46.992678 | orchestrator | 2025-03-23 13:36:46 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:36:50.059966 | orchestrator | 2025-03-23 13:36:46 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:36:50.060083 | orchestrator | 2025-03-23 13:36:50 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:36:50.061326 | orchestrator | 2025-03-23 13:36:50 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:36:50.063536 | orchestrator | 2025-03-23 13:36:50 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:36:53.109408 | orchestrator | 2025-03-23 13:36:50 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:36:53.109506 | orchestrator | 2025-03-23 13:36:53 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:36:53.110563 | orchestrator | 2025-03-23 13:36:53 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:36:53.111877 | orchestrator | 2025-03-23 13:36:53 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:36:56.160316 | orchestrator | 2025-03-23 13:36:53 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:36:56.160439 | orchestrator | 2025-03-23 13:36:56 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:36:56.162083 | orchestrator | 2025-03-23 13:36:56 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:36:56.162118 | orchestrator | 2025-03-23 13:36:56 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:36:59.218419 | orchestrator | 2025-03-23 13:36:56 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:36:59.218555 | orchestrator | 2025-03-23 13:36:59 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:36:59.220207 | orchestrator | 2025-03-23 13:36:59 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:36:59.220240 | 
orchestrator | 2025-03-23 13:36:59 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:37:02.287313 | orchestrator | 2025-03-23 13:36:59 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:37:02.287431 | orchestrator | 2025-03-23 13:37:02 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:37:02.289349 | orchestrator | 2025-03-23 13:37:02 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:37:02.291737 | orchestrator | 2025-03-23 13:37:02 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:37:05.334257 | orchestrator | 2025-03-23 13:37:02 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:37:05.334394 | orchestrator | 2025-03-23 13:37:05 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:37:05.335028 | orchestrator | 2025-03-23 13:37:05 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:37:05.336167 | orchestrator | 2025-03-23 13:37:05 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:37:08.390855 | orchestrator | 2025-03-23 13:37:05 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:37:08.390975 | orchestrator | 2025-03-23 13:37:08 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:37:08.392760 | orchestrator | 2025-03-23 13:37:08 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:37:08.393809 | orchestrator | 2025-03-23 13:37:08 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:37:08.393954 | orchestrator | 2025-03-23 13:37:08 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:37:11.451158 | orchestrator | 2025-03-23 13:37:11 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:37:11.451528 | orchestrator | 2025-03-23 13:37:11 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:37:11.452371 | orchestrator | 2025-03-23 13:37:11 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:37:14.492616 | orchestrator | 2025-03-23 13:37:11 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:37:14.492779 | orchestrator | 2025-03-23 13:37:14 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:37:14.494215 | orchestrator | 2025-03-23 13:37:14 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state STARTED 2025-03-23 13:37:14.495591 | orchestrator | 2025-03-23 13:37:14 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:37:17.555731 | orchestrator | 2025-03-23 13:37:14 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:37:17.555848 | orchestrator | 2025-03-23 13:37:17 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:37:17.560781 | orchestrator | 2025-03-23 13:37:17 | INFO  | Task e0c07292-e685-456d-be98-9fe1599c78a3 is in state SUCCESS 2025-03-23 13:37:17.562268 | orchestrator | 2025-03-23 13:37:17.562308 | orchestrator | 2025-03-23 13:37:17.562323 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-03-23 13:37:17.562338 | orchestrator | 2025-03-23 13:37:17.562352 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-03-23 13:37:17.562367 | orchestrator | Sunday 23 March 2025 13:33:11 +0000 
(0:00:00.226) 0:00:00.226 ********** 2025-03-23 13:37:17.562381 | orchestrator | ok: [localhost] => { 2025-03-23 13:37:17.562397 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-03-23 13:37:17.562412 | orchestrator | } 2025-03-23 13:37:17.562426 | orchestrator | 2025-03-23 13:37:17.562440 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-03-23 13:37:17.562454 | orchestrator | Sunday 23 March 2025 13:33:12 +0000 (0:00:00.067) 0:00:00.293 ********** 2025-03-23 13:37:17.562468 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-03-23 13:37:17.562484 | orchestrator | ...ignoring 2025-03-23 13:37:17.562498 | orchestrator | 2025-03-23 13:37:17.562512 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-03-23 13:37:17.562526 | orchestrator | Sunday 23 March 2025 13:33:14 +0000 (0:00:02.577) 0:00:02.871 ********** 2025-03-23 13:37:17.562540 | orchestrator | skipping: [localhost] 2025-03-23 13:37:17.562554 | orchestrator | 2025-03-23 13:37:17.562568 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-03-23 13:37:17.562582 | orchestrator | Sunday 23 March 2025 13:33:14 +0000 (0:00:00.048) 0:00:02.919 ********** 2025-03-23 13:37:17.562595 | orchestrator | ok: [localhost] 2025-03-23 13:37:17.562609 | orchestrator | 2025-03-23 13:37:17.562662 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 13:37:17.562677 | orchestrator | 2025-03-23 13:37:17.562707 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 13:37:17.562722 | orchestrator | Sunday 23 March 2025 13:33:14 +0000 (0:00:00.178) 0:00:03.098 ********** 2025-03-23 13:37:17.562736 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:37:17.562750 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:37:17.562764 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:37:17.562778 | orchestrator | 2025-03-23 13:37:17.562792 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-23 13:37:17.562806 | orchestrator | Sunday 23 March 2025 13:33:15 +0000 (0:00:00.436) 0:00:03.534 ********** 2025-03-23 13:37:17.562820 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-03-23 13:37:17.562846 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-03-23 13:37:17.562861 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-03-23 13:37:17.562878 | orchestrator | 2025-03-23 13:37:17.562893 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-03-23 13:37:17.562909 | orchestrator | 2025-03-23 13:37:17.562925 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-03-23 13:37:17.562940 | orchestrator | Sunday 23 March 2025 13:33:15 +0000 (0:00:00.516) 0:00:04.051 ********** 2025-03-23 13:37:17.562956 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-23 13:37:17.562971 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-03-23 13:37:17.562986 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-03-23 13:37:17.563023 | orchestrator | 2025-03-23 13:37:17.563038 | 
orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-03-23 13:37:17.563054 | orchestrator | Sunday 23 March 2025 13:33:16 +0000 (0:00:00.753) 0:00:04.805 ********** 2025-03-23 13:37:17.563069 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:37:17.563085 | orchestrator | 2025-03-23 13:37:17.563100 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-03-23 13:37:17.563115 | orchestrator | Sunday 23 March 2025 13:33:17 +0000 (0:00:00.674) 0:00:05.479 ********** 2025-03-23 13:37:17.563147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-03-23 13:37:17.563169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 
'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-03-23 13:37:17.563195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-03-23 13:37:17.563220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-03-23 13:37:17.563236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 
'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-03-23 13:37:17.563251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-03-23 13:37:17.563273 | orchestrator | 2025-03-23 13:37:17.563287 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-03-23 13:37:17.563301 | orchestrator | Sunday 23 March 2025 13:33:22 +0000 (0:00:05.142) 0:00:10.622 ********** 2025-03-23 13:37:17.563315 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:37:17.563330 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:37:17.563349 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:37:17.563363 | orchestrator | 2025-03-23 13:37:17.563377 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-03-23 13:37:17.563391 | orchestrator | Sunday 23 March 2025 13:33:23 +0000 (0:00:00.932) 0:00:11.554 ********** 2025-03-23 13:37:17.563405 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:37:17.563419 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:37:17.563433 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:37:17.563447 | orchestrator | 2025-03-23 13:37:17.563461 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-03-23 13:37:17.563474 | orchestrator | Sunday 23 March 2025 13:33:25 +0000 (0:00:02.264) 0:00:13.819 ********** 2025-03-23 13:37:17.563496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 
inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-03-23 13:37:17.563513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-03-23 13:37:17.563536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-03-23 13:37:17.563560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-03-23 13:37:17.563575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-03-23 13:37:17.563591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-03-23 13:37:17.563613 | orchestrator | 2025-03-23 13:37:17.563652 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-03-23 13:37:17.563667 | orchestrator | Sunday 23 March 2025 13:33:31 +0000 (0:00:06.196) 0:00:20.015 ********** 2025-03-23 13:37:17.563681 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:37:17.563695 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:37:17.563708 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:37:17.563836 | orchestrator | 2025-03-23 13:37:17.563854 | orchestrator | TASK [mariadb : Copying over galera.cnf] 
*************************************** 2025-03-23 13:37:17.563868 | orchestrator | Sunday 23 March 2025 13:33:32 +0000 (0:00:01.136) 0:00:21.151 ********** 2025-03-23 13:37:17.563883 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:37:17.563897 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:37:17.563911 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:37:17.563925 | orchestrator | 2025-03-23 13:37:17.563939 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-03-23 13:37:17.563953 | orchestrator | Sunday 23 March 2025 13:33:45 +0000 (0:00:12.183) 0:00:33.335 ********** 2025-03-23 13:37:17.563977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-03-23 13:37:17.563994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-03-23 13:37:17.564019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-03-23 13:37:17.564043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-03-23 13:37:17.564058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-03-23 13:37:17.564080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-03-23 13:37:17.564095 | orchestrator | 2025-03-23 13:37:17.564109 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-03-23 13:37:17.564123 | orchestrator | Sunday 23 March 2025 13:33:50 +0000 (0:00:05.560) 0:00:38.895 ********** 2025-03-23 13:37:17.564137 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:37:17.564151 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:37:17.564165 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:37:17.564179 | orchestrator | 2025-03-23 13:37:17.564193 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-03-23 13:37:17.564207 | orchestrator | Sunday 23 March 2025 13:33:51 +0000 (0:00:01.194) 0:00:40.089 ********** 2025-03-23 13:37:17.564221 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:37:17.564235 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:37:17.564249 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:37:17.564263 | orchestrator | 2025-03-23 13:37:17.564277 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-03-23 13:37:17.564290 | orchestrator | Sunday 23 March 2025 13:33:52 +0000 (0:00:00.508) 0:00:40.598 ********** 2025-03-23 13:37:17.564304 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:37:17.564318 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:37:17.564332 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:37:17.564346 | orchestrator | 2025-03-23 13:37:17.564360 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-03-23 13:37:17.564374 | orchestrator | Sunday 23 March 2025 13:33:52 +0000 (0:00:00.404) 0:00:41.003 ********** 2025-03-23 13:37:17.564389 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-03-23 13:37:17.564404 | orchestrator | ...ignoring 2025-03-23 13:37:17.564419 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-03-23 13:37:17.564433 | orchestrator | ...ignoring 2025-03-23 13:37:17.564447 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-03-23 13:37:17.564463 | orchestrator | ...ignoring 2025-03-23 13:37:17.564478 | orchestrator | 2025-03-23 13:37:17.564494 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-03-23 13:37:17.564509 | orchestrator | Sunday 23 March 2025 13:34:03 +0000 (0:00:10.958) 0:00:51.961 ********** 2025-03-23 13:37:17.564524 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:37:17.564539 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:37:17.564554 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:37:17.564570 | orchestrator | 2025-03-23 13:37:17.564586 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-03-23 13:37:17.564601 | orchestrator | Sunday 23 March 2025 13:34:04 +0000 (0:00:00.872) 0:00:52.834 ********** 2025-03-23 13:37:17.564616 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:37:17.564663 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:37:17.564679 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:37:17.564695 | orchestrator | 2025-03-23 13:37:17.564716 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-03-23 13:37:17.564732 | orchestrator | Sunday 23 March 2025 13:34:05 +0000 (0:00:00.898) 0:00:53.733 ********** 2025-03-23 13:37:17.564747 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:37:17.564763 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:37:17.564778 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:37:17.564793 | orchestrator | 2025-03-23 13:37:17.564815 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-03-23 13:37:17.564830 | orchestrator | Sunday 23 March 2025 13:34:06 +0000 (0:00:00.564) 0:00:54.298 ********** 2025-03-23 13:37:17.564843 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:37:17.564857 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:37:17.564871 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:37:17.564884 | orchestrator | 2025-03-23 13:37:17.564898 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-03-23 13:37:17.564912 | orchestrator | Sunday 23 March 2025 13:34:06 +0000 (0:00:00.706) 0:00:55.004 ********** 2025-03-23 13:37:17.564926 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:37:17.564939 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:37:17.564953 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:37:17.564967 | orchestrator | 2025-03-23 13:37:17.564981 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-03-23 13:37:17.564995 | orchestrator | Sunday 23 March 2025 13:34:07 +0000 (0:00:00.699) 0:00:55.703 ********** 2025-03-23 13:37:17.565008 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:37:17.565022 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:37:17.565036 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:37:17.565049 | orchestrator | 2025-03-23 13:37:17.565063 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-03-23 13:37:17.565077 | orchestrator | Sunday 23 March 2025 13:34:08 +0000 (0:00:00.626) 0:00:56.330 ********** 2025-03-23 13:37:17.565090 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:37:17.565104 | orchestrator | skipping: 
[testbed-node-2] 2025-03-23 13:37:17.565118 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-03-23 13:37:17.565132 | orchestrator | 2025-03-23 13:37:17.565145 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-03-23 13:37:17.565159 | orchestrator | Sunday 23 March 2025 13:34:08 +0000 (0:00:00.541) 0:00:56.872 ********** 2025-03-23 13:37:17.565172 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:37:17.565186 | orchestrator | 2025-03-23 13:37:17.565200 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-03-23 13:37:17.565214 | orchestrator | Sunday 23 March 2025 13:34:22 +0000 (0:00:13.564) 0:01:10.436 ********** 2025-03-23 13:37:17.565227 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:37:17.565241 | orchestrator | 2025-03-23 13:37:17.565254 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-03-23 13:37:17.565268 | orchestrator | Sunday 23 March 2025 13:34:22 +0000 (0:00:00.140) 0:01:10.576 ********** 2025-03-23 13:37:17.565282 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:37:17.565296 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:37:17.565309 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:37:17.565323 | orchestrator | 2025-03-23 13:37:17.565337 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-03-23 13:37:17.565350 | orchestrator | Sunday 23 March 2025 13:34:23 +0000 (0:00:01.200) 0:01:11.777 ********** 2025-03-23 13:37:17.565364 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:37:17.565378 | orchestrator | 2025-03-23 13:37:17.565391 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-03-23 13:37:17.565405 | orchestrator | Sunday 23 March 2025 13:34:35 +0000 (0:00:11.502) 0:01:23.279 ********** 2025-03-23 13:37:17.565426 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 
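The handler above retried because the freshly bootstrapped container was not yet answering on 3306; earlier, the "Check MariaDB service port liveness" task timed out waiting for the search string "MariaDB" on each node's 192.168.16.x:3306. Below is a minimal Python sketch of that kind of banner check with retries; the host, retry count, and delay are illustrative values, not the playbook's actual parameters.

    import socket
    import time

    def wait_for_mariadb_banner(host, port=3306, retries=10, delay=5, timeout=10):
        # Succeed once the server greeting read from host:port contains b"MariaDB",
        # retrying while the container is still starting up.
        for _ in range(retries):
            try:
                with socket.create_connection((host, port), timeout=timeout) as sock:
                    sock.settimeout(timeout)
                    banner = sock.recv(1024)  # the server sends its handshake packet first
                    if b"MariaDB" in banner:
                        return True
            except OSError:
                pass  # connection refused/reset until mysqld is listening
            time.sleep(delay)
        return False

    # Example: wait_for_mariadb_banner("192.168.16.10")
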
2025-03-23 13:37:17.565440 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:37:17.565453 | orchestrator | 2025-03-23 13:37:17.565467 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-03-23 13:37:17.565481 | orchestrator | Sunday 23 March 2025 13:34:42 +0000 (0:00:07.272) 0:01:30.552 ********** 2025-03-23 13:37:17.565494 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:37:17.565508 | orchestrator | 2025-03-23 13:37:17.565522 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-03-23 13:37:17.565536 | orchestrator | Sunday 23 March 2025 13:34:45 +0000 (0:00:03.026) 0:01:33.579 ********** 2025-03-23 13:37:17.565550 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:37:17.565563 | orchestrator | 2025-03-23 13:37:17.565577 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-03-23 13:37:17.565590 | orchestrator | Sunday 23 March 2025 13:34:45 +0000 (0:00:00.156) 0:01:33.736 ********** 2025-03-23 13:37:17.565604 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:37:17.565617 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:37:17.565727 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:37:17.565754 | orchestrator | 2025-03-23 13:37:17.565768 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-03-23 13:37:17.565782 | orchestrator | Sunday 23 March 2025 13:34:46 +0000 (0:00:00.574) 0:01:34.311 ********** 2025-03-23 13:37:17.565796 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:37:17.565810 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:37:17.565823 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:37:17.565837 | orchestrator | 2025-03-23 13:37:17.565851 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-03-23 13:37:17.565865 | orchestrator | Sunday 23 March 2025 13:34:46 +0000 (0:00:00.523) 0:01:34.834 ********** 2025-03-23 13:37:17.565879 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-03-23 13:37:17.565893 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:37:17.565906 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:37:17.565920 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:37:17.565934 | orchestrator | 2025-03-23 13:37:17.565952 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-03-23 13:37:17.565967 | orchestrator | skipping: no hosts matched 2025-03-23 13:37:17.565981 | orchestrator | 2025-03-23 13:37:17.565995 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-03-23 13:37:17.566008 | orchestrator | 2025-03-23 13:37:17.566055 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-03-23 13:37:17.566072 | orchestrator | Sunday 23 March 2025 13:35:11 +0000 (0:00:24.852) 0:01:59.686 ********** 2025-03-23 13:37:17.566086 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:37:17.566100 | orchestrator | 2025-03-23 13:37:17.566121 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-03-23 13:37:17.566136 | orchestrator | Sunday 23 March 2025 13:35:29 +0000 (0:00:18.110) 0:02:17.797 ********** 2025-03-23 13:37:17.566149 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:37:17.566163 | orchestrator | 
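The "Wait for ... service to sync WSREP" steps around this point block until Galera reports the node as synced before the next node is restarted. A rough sketch of such a check, assuming the mariadb container name and the monitor user shown in the deploy output (the placeholder password must be replaced), polling wsrep_local_state_comment through the mysql client via docker exec:

    import subprocess
    import time

    def wait_for_wsrep_synced(container="mariadb", user="monitor",
                              password="CHANGE_ME", retries=10, delay=5):
        # Poll Galera's wsrep_local_state_comment until it reports "Synced".
        query = "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"
        for _ in range(retries):
            result = subprocess.run(
                ["docker", "exec", container, "mysql",
                 f"--user={user}", f"--password={password}",
                 "--batch", "--skip-column-names", "-e", query],
                capture_output=True, text=True)
            if result.returncode == 0 and "Synced" in result.stdout:
                return True
            time.sleep(delay)
        return False
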
2025-03-23 13:37:17.566176 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-03-23 13:37:17.566189 | orchestrator | Sunday 23 March 2025 13:35:50 +0000 (0:00:20.617) 0:02:38.414 ********** 2025-03-23 13:37:17.566201 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:37:17.566213 | orchestrator | 2025-03-23 13:37:17.566225 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-03-23 13:37:17.566237 | orchestrator | 2025-03-23 13:37:17.566250 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-03-23 13:37:17.566262 | orchestrator | Sunday 23 March 2025 13:35:53 +0000 (0:00:03.225) 0:02:41.639 ********** 2025-03-23 13:37:17.566274 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:37:17.566286 | orchestrator | 2025-03-23 13:37:17.566306 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-03-23 13:37:17.566318 | orchestrator | Sunday 23 March 2025 13:36:12 +0000 (0:00:19.080) 0:03:00.719 ********** 2025-03-23 13:37:17.566330 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:37:17.566342 | orchestrator | 2025-03-23 13:37:17.566355 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-03-23 13:37:17.566367 | orchestrator | Sunday 23 March 2025 13:36:33 +0000 (0:00:20.609) 0:03:21.329 ********** 2025-03-23 13:37:17.566379 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:37:17.566391 | orchestrator | 2025-03-23 13:37:17.566403 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-03-23 13:37:17.566415 | orchestrator | 2025-03-23 13:37:17.566428 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-03-23 13:37:17.566440 | orchestrator | Sunday 23 March 2025 13:36:36 +0000 (0:00:03.148) 0:03:24.477 ********** 2025-03-23 13:37:17.566452 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:37:17.566464 | orchestrator | 2025-03-23 13:37:17.566476 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-03-23 13:37:17.566488 | orchestrator | Sunday 23 March 2025 13:36:51 +0000 (0:00:15.409) 0:03:39.887 ********** 2025-03-23 13:37:17.566501 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:37:17.566513 | orchestrator | 2025-03-23 13:37:17.566525 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-03-23 13:37:17.566537 | orchestrator | Sunday 23 March 2025 13:36:56 +0000 (0:00:04.662) 0:03:44.550 ********** 2025-03-23 13:37:17.566550 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:37:17.566562 | orchestrator | 2025-03-23 13:37:17.566574 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-03-23 13:37:17.566586 | orchestrator | 2025-03-23 13:37:17.566598 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-03-23 13:37:17.566610 | orchestrator | Sunday 23 March 2025 13:36:59 +0000 (0:00:03.002) 0:03:47.552 ********** 2025-03-23 13:37:17.566639 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:37:17.566653 | orchestrator | 2025-03-23 13:37:17.566665 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-03-23 13:37:17.566677 | orchestrator | Sunday 23 
March 2025 13:37:00 +0000 (0:00:00.914) 0:03:48.467 ********** 2025-03-23 13:37:17.566690 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:37:17.566702 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:37:17.566714 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:37:17.566726 | orchestrator | 2025-03-23 13:37:17.566739 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-03-23 13:37:17.566751 | orchestrator | Sunday 23 March 2025 13:37:03 +0000 (0:00:03.289) 0:03:51.756 ********** 2025-03-23 13:37:17.566763 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:37:17.566775 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:37:17.566787 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:37:17.566800 | orchestrator | 2025-03-23 13:37:17.566812 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-03-23 13:37:17.566824 | orchestrator | Sunday 23 March 2025 13:37:05 +0000 (0:00:02.456) 0:03:54.212 ********** 2025-03-23 13:37:17.566837 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:37:17.566849 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:37:17.566861 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:37:17.566873 | orchestrator | 2025-03-23 13:37:17.566890 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-03-23 13:37:17.566903 | orchestrator | Sunday 23 March 2025 13:37:08 +0000 (0:00:02.606) 0:03:56.819 ********** 2025-03-23 13:37:17.566916 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:37:17.566928 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:37:17.566941 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:37:17.566953 | orchestrator | 2025-03-23 13:37:17.566965 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-03-23 13:37:17.566986 | orchestrator | Sunday 23 March 2025 13:37:11 +0000 (0:00:02.532) 0:03:59.351 ********** 2025-03-23 13:37:17.566999 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:37:17.567011 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:37:17.567023 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:37:17.567035 | orchestrator | 2025-03-23 13:37:17.567048 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-03-23 13:37:17.567060 | orchestrator | Sunday 23 March 2025 13:37:15 +0000 (0:00:04.062) 0:04:03.414 ********** 2025-03-23 13:37:17.567072 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:37:17.567085 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:37:17.567097 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:37:17.567109 | orchestrator | 2025-03-23 13:37:17.567121 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:37:17.567134 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-03-23 13:37:17.567146 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-03-23 13:37:17.567166 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-03-23 13:37:17.567582 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-03-23 13:37:17.567604 | orchestrator | 2025-03-23 13:37:17.567615 | orchestrator | 2025-03-23 
13:37:17.567647 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:37:17.567658 | orchestrator | Sunday 23 March 2025 13:37:15 +0000 (0:00:00.439) 0:04:03.854 ********** 2025-03-23 13:37:17.567669 | orchestrator | =============================================================================== 2025-03-23 13:37:17.567681 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.23s 2025-03-23 13:37:17.567691 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 37.19s 2025-03-23 13:37:17.567702 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 24.85s 2025-03-23 13:37:17.567713 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 15.41s 2025-03-23 13:37:17.567724 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 13.56s 2025-03-23 13:37:17.567735 | orchestrator | mariadb : Copying over galera.cnf -------------------------------------- 12.18s 2025-03-23 13:37:17.567746 | orchestrator | mariadb : Starting first MariaDB container ----------------------------- 11.50s 2025-03-23 13:37:17.567757 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.96s 2025-03-23 13:37:17.567768 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.27s 2025-03-23 13:37:17.567779 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 6.37s 2025-03-23 13:37:17.567790 | orchestrator | mariadb : Copying over config.json files for services ------------------- 6.20s 2025-03-23 13:37:17.567801 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 5.56s 2025-03-23 13:37:17.567812 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 5.14s 2025-03-23 13:37:17.567823 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.66s 2025-03-23 13:37:17.567834 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 4.06s 2025-03-23 13:37:17.567844 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 3.29s 2025-03-23 13:37:17.567855 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 3.03s 2025-03-23 13:37:17.567866 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 3.00s 2025-03-23 13:37:17.567877 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.61s 2025-03-23 13:37:17.567896 | orchestrator | Check MariaDB service --------------------------------------------------- 2.58s 2025-03-23 13:37:17.567908 | orchestrator | 2025-03-23 13:37:17 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:37:17.567924 | orchestrator | 2025-03-23 13:37:17 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:37:20.624213 | orchestrator | 2025-03-23 13:37:17 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:37:20.624315 | orchestrator | 2025-03-23 13:37:17 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:37:20.624348 | orchestrator | 2025-03-23 13:37:20 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:37:20.626093 | orchestrator | 2025-03-23 13:37:20 
| INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:37:20.629171 | orchestrator | 2025-03-23 13:37:20 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:37:23.680139 | orchestrator | 2025-03-23 13:37:20 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:37:23.680247 | orchestrator | 2025-03-23 13:37:20 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:37:23.680279 | orchestrator | 2025-03-23 13:37:23 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:37:23.684247 | orchestrator | 2025-03-23 13:37:23 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:37:23.686254 | orchestrator | 2025-03-23 13:37:23 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:37:23.688590 | orchestrator | 2025-03-23 13:37:23 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:37:26.737342 | orchestrator | 2025-03-23 13:37:23 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:37:26.737460 | orchestrator | 2025-03-23 13:37:26 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:37:26.739558 | orchestrator | 2025-03-23 13:37:26 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:37:26.741697 | orchestrator | 2025-03-23 13:37:26 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:37:26.743502 | orchestrator | 2025-03-23 13:37:26 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:37:29.796992 | orchestrator | 2025-03-23 13:37:26 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:37:29.797231 | orchestrator | 2025-03-23 13:37:29 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:37:29.798441 | orchestrator | 2025-03-23 13:37:29 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:37:29.798484 | orchestrator | 2025-03-23 13:37:29 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:37:29.799557 | orchestrator | 2025-03-23 13:37:29 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:37:29.799683 | orchestrator | 2025-03-23 13:37:29 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:37:32.845205 | orchestrator | 2025-03-23 13:37:32 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:37:32.848774 | orchestrator | 2025-03-23 13:37:32 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:37:32.848892 | orchestrator | 2025-03-23 13:37:32 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:37:32.850496 | orchestrator | 2025-03-23 13:37:32 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:37:32.850665 | orchestrator | 2025-03-23 13:37:32 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:37:35.904435 | orchestrator | 2025-03-23 13:37:35 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:37:35.905226 | orchestrator | 2025-03-23 13:37:35 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:37:35.906398 | orchestrator | 2025-03-23 13:37:35 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:37:35.907727 | orchestrator | 2025-03-23 13:37:35 | 
INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:37:38.952098 | orchestrator | 2025-03-23 13:37:35 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:37:38.952234 | orchestrator | 2025-03-23 13:37:38 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:37:38.953419 | orchestrator | 2025-03-23 13:37:38 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:37:38.955423 | orchestrator | 2025-03-23 13:37:38 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:37:38.955918 | orchestrator | 2025-03-23 13:37:38 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:37:42.008519 | orchestrator | 2025-03-23 13:37:38 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:37:42.008672 | orchestrator | 2025-03-23 13:37:42 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:37:42.011178 | orchestrator | 2025-03-23 13:37:42 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:37:42.014972 | orchestrator | 2025-03-23 13:37:42 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:37:42.018592 | orchestrator | 2025-03-23 13:37:42 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:37:42.019002 | orchestrator | 2025-03-23 13:37:42 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:37:45.066077 | orchestrator | 2025-03-23 13:37:45 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:37:45.066969 | orchestrator | 2025-03-23 13:37:45 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:37:45.067013 | orchestrator | 2025-03-23 13:37:45 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:37:45.068110 | orchestrator | 2025-03-23 13:37:45 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:37:48.125127 | orchestrator | 2025-03-23 13:37:45 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:37:48.125269 | orchestrator | 2025-03-23 13:37:48 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:37:51.174787 | orchestrator | 2025-03-23 13:37:48 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:37:51.174902 | orchestrator | 2025-03-23 13:37:48 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:37:51.174920 | orchestrator | 2025-03-23 13:37:48 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:37:51.174935 | orchestrator | 2025-03-23 13:37:48 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:37:51.174967 | orchestrator | 2025-03-23 13:37:51 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:37:51.175424 | orchestrator | 2025-03-23 13:37:51 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:37:51.175488 | orchestrator | 2025-03-23 13:37:51 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:37:51.176898 | orchestrator | 2025-03-23 13:37:51 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:37:54.226740 | orchestrator | 2025-03-23 13:37:51 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:37:54.226875 | orchestrator | 2025-03-23 13:37:54 | INFO  | Task 
f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:37:54.227235 | orchestrator | 2025-03-23 13:37:54 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:37:54.227268 | orchestrator | 2025-03-23 13:37:54 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:37:54.229313 | orchestrator | 2025-03-23 13:37:54 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:37:57.268190 | orchestrator | 2025-03-23 13:37:54 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:37:57.268324 | orchestrator | 2025-03-23 13:37:57 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:37:57.269894 | orchestrator | 2025-03-23 13:37:57 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:37:57.270883 | orchestrator | 2025-03-23 13:37:57 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:37:57.271811 | orchestrator | 2025-03-23 13:37:57 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:38:00.322354 | orchestrator | 2025-03-23 13:37:57 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:38:00.322475 | orchestrator | 2025-03-23 13:38:00 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:38:00.323355 | orchestrator | 2025-03-23 13:38:00 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:38:00.324289 | orchestrator | 2025-03-23 13:38:00 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:38:00.325248 | orchestrator | 2025-03-23 13:38:00 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:38:03.370253 | orchestrator | 2025-03-23 13:38:00 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:38:03.370370 | orchestrator | 2025-03-23 13:38:03 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:38:03.370884 | orchestrator | 2025-03-23 13:38:03 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:38:03.374617 | orchestrator | 2025-03-23 13:38:03 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:38:03.375315 | orchestrator | 2025-03-23 13:38:03 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:38:06.436793 | orchestrator | 2025-03-23 13:38:03 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:38:06.436921 | orchestrator | 2025-03-23 13:38:06 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:38:06.437834 | orchestrator | 2025-03-23 13:38:06 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:38:06.439621 | orchestrator | 2025-03-23 13:38:06 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:38:06.440831 | orchestrator | 2025-03-23 13:38:06 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:38:09.503719 | orchestrator | 2025-03-23 13:38:06 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:38:09.503836 | orchestrator | 2025-03-23 13:38:09 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:38:12.572218 | orchestrator | 2025-03-23 13:38:09 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:38:12.572320 | orchestrator | 2025-03-23 13:38:09 | INFO  | Task 
7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:38:12.572337 | orchestrator | 2025-03-23 13:38:09 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:38:12.572353 | orchestrator | 2025-03-23 13:38:09 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:38:12.572382 | orchestrator | 2025-03-23 13:38:12 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:38:12.576112 | orchestrator | 2025-03-23 13:38:12 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:38:12.577131 | orchestrator | 2025-03-23 13:38:12 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:38:12.577162 | orchestrator | 2025-03-23 13:38:12 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:38:12.577610 | orchestrator | 2025-03-23 13:38:12 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:38:15.625927 | orchestrator | 2025-03-23 13:38:15 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:38:15.626523 | orchestrator | 2025-03-23 13:38:15 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:38:15.628563 | orchestrator | 2025-03-23 13:38:15 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:38:15.629896 | orchestrator | 2025-03-23 13:38:15 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:38:15.630582 | orchestrator | 2025-03-23 13:38:15 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:38:18.677343 | orchestrator | 2025-03-23 13:38:18 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:38:18.679824 | orchestrator | 2025-03-23 13:38:18 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:38:18.681683 | orchestrator | 2025-03-23 13:38:18 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:38:18.683769 | orchestrator | 2025-03-23 13:38:18 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:38:21.737545 | orchestrator | 2025-03-23 13:38:18 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:38:21.737711 | orchestrator | 2025-03-23 13:38:21 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:38:21.738986 | orchestrator | 2025-03-23 13:38:21 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:38:21.740620 | orchestrator | 2025-03-23 13:38:21 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:38:21.742798 | orchestrator | 2025-03-23 13:38:21 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:38:24.796124 | orchestrator | 2025-03-23 13:38:21 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:38:24.796245 | orchestrator | 2025-03-23 13:38:24 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:38:24.798801 | orchestrator | 2025-03-23 13:38:24 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:38:24.799356 | orchestrator | 2025-03-23 13:38:24 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:38:24.800419 | orchestrator | 2025-03-23 13:38:24 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:38:24.800722 | orchestrator | 2025-03-23 13:38:24 | INFO  | Wait 1 
second(s) until the next check 2025-03-23 13:38:27.840625 | orchestrator | 2025-03-23 13:38:27 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:38:27.841560 | orchestrator | 2025-03-23 13:38:27 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:38:27.846447 | orchestrator | 2025-03-23 13:38:27 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:38:27.848243 | orchestrator | 2025-03-23 13:38:27 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:38:30.909915 | orchestrator | 2025-03-23 13:38:27 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:38:30.910097 | orchestrator | 2025-03-23 13:38:30 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:38:30.913387 | orchestrator | 2025-03-23 13:38:30 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:38:30.914342 | orchestrator | 2025-03-23 13:38:30 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:38:30.915777 | orchestrator | 2025-03-23 13:38:30 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:38:30.915877 | orchestrator | 2025-03-23 13:38:30 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:38:33.982254 | orchestrator | 2025-03-23 13:38:33 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:38:33.983939 | orchestrator | 2025-03-23 13:38:33 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:38:33.985936 | orchestrator | 2025-03-23 13:38:33 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:38:33.987238 | orchestrator | 2025-03-23 13:38:33 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:38:33.987578 | orchestrator | 2025-03-23 13:38:33 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:38:37.039728 | orchestrator | 2025-03-23 13:38:37 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:38:37.041059 | orchestrator | 2025-03-23 13:38:37 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:38:37.045010 | orchestrator | 2025-03-23 13:38:37 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:38:37.047454 | orchestrator | 2025-03-23 13:38:37 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:38:40.093704 | orchestrator | 2025-03-23 13:38:37 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:38:40.093837 | orchestrator | 2025-03-23 13:38:40 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:38:40.094471 | orchestrator | 2025-03-23 13:38:40 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:38:40.095575 | orchestrator | 2025-03-23 13:38:40 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:38:40.096546 | orchestrator | 2025-03-23 13:38:40 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:38:43.138237 | orchestrator | 2025-03-23 13:38:40 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:38:43.138364 | orchestrator | 2025-03-23 13:38:43 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:38:43.142311 | orchestrator | 2025-03-23 13:38:43 | INFO  | Task 
98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:38:43.145730 | orchestrator | 2025-03-23 13:38:43 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:38:43.152326 | orchestrator | 2025-03-23 13:38:43 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:38:46.202740 | orchestrator | 2025-03-23 13:38:43 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:38:46.202856 | orchestrator | 2025-03-23 13:38:46 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:38:46.204208 | orchestrator | 2025-03-23 13:38:46 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:38:46.206343 | orchestrator | 2025-03-23 13:38:46 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:38:46.208603 | orchestrator | 2025-03-23 13:38:46 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:38:49.265337 | orchestrator | 2025-03-23 13:38:46 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:38:49.265466 | orchestrator | 2025-03-23 13:38:49 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:38:49.265881 | orchestrator | 2025-03-23 13:38:49 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:38:49.265913 | orchestrator | 2025-03-23 13:38:49 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state STARTED 2025-03-23 13:38:49.266766 | orchestrator | 2025-03-23 13:38:49 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:38:52.331025 | orchestrator | 2025-03-23 13:38:49 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:38:52.331158 | orchestrator | 2025-03-23 13:38:52 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:38:52.333120 | orchestrator | 2025-03-23 13:38:52 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:38:52.335393 | orchestrator | 2025-03-23 13:38:52 | INFO  | Task 96d358bb-6e84-452e-9206-bfa83f78a355 is in state STARTED 2025-03-23 13:38:52.338286 | orchestrator | 2025-03-23 13:38:52 | INFO  | Task 7462161e-0508-4492-b5dc-135e01976cfd is in state SUCCESS 2025-03-23 13:38:52.340870 | orchestrator | 2025-03-23 13:38:52.340911 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-03-23 13:38:52.340928 | orchestrator | 2025-03-23 13:38:52.340944 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-03-23 13:38:52.340959 | orchestrator | 2025-03-23 13:38:52.340991 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-03-23 13:38:52.341007 | orchestrator | Sunday 23 March 2025 13:36:32 +0000 (0:00:01.355) 0:00:01.355 ********** 2025-03-23 13:38:52.341023 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:38:52.341039 | orchestrator | 2025-03-23 13:38:52.341054 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-03-23 13:38:52.341068 | orchestrator | Sunday 23 March 2025 13:36:33 +0000 (0:00:00.580) 0:00:01.935 ********** 2025-03-23 13:38:52.341084 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-03-23 13:38:52.341099 | orchestrator | changed: [testbed-node-3] => 
(item=testbed-node-1) 2025-03-23 13:38:52.341114 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-03-23 13:38:52.341129 | orchestrator | 2025-03-23 13:38:52.341144 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-03-23 13:38:52.341159 | orchestrator | Sunday 23 March 2025 13:36:34 +0000 (0:00:01.117) 0:00:03.052 ********** 2025-03-23 13:38:52.341549 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:38:52.341599 | orchestrator | 2025-03-23 13:38:52.341614 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-03-23 13:38:52.341628 | orchestrator | Sunday 23 March 2025 13:36:35 +0000 (0:00:00.811) 0:00:03.864 ********** 2025-03-23 13:38:52.341663 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:38:52.341679 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:38:52.341694 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:38:52.341708 | orchestrator | 2025-03-23 13:38:52.341722 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-03-23 13:38:52.341735 | orchestrator | Sunday 23 March 2025 13:36:35 +0000 (0:00:00.687) 0:00:04.552 ********** 2025-03-23 13:38:52.341749 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:38:52.341763 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:38:52.341777 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:38:52.341791 | orchestrator | 2025-03-23 13:38:52.341805 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-03-23 13:38:52.341819 | orchestrator | Sunday 23 March 2025 13:36:36 +0000 (0:00:00.339) 0:00:04.891 ********** 2025-03-23 13:38:52.341833 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:38:52.341847 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:38:52.341860 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:38:52.341874 | orchestrator | 2025-03-23 13:38:52.341888 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-03-23 13:38:52.341902 | orchestrator | Sunday 23 March 2025 13:36:37 +0000 (0:00:00.940) 0:00:05.832 ********** 2025-03-23 13:38:52.341937 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:38:52.341953 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:38:52.341968 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:38:52.341982 | orchestrator | 2025-03-23 13:38:52.341998 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-03-23 13:38:52.342012 | orchestrator | Sunday 23 March 2025 13:36:37 +0000 (0:00:00.353) 0:00:06.186 ********** 2025-03-23 13:38:52.342081 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:38:52.342097 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:38:52.342112 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:38:52.342127 | orchestrator | 2025-03-23 13:38:52.342141 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-03-23 13:38:52.342157 | orchestrator | Sunday 23 March 2025 13:36:37 +0000 (0:00:00.359) 0:00:06.545 ********** 2025-03-23 13:38:52.342173 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:38:52.342189 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:38:52.342214 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:38:52.342230 | orchestrator | 2025-03-23 13:38:52.342246 | orchestrator | TASK [ceph-facts : set_fact 
discovered_interpreter_python if not previously set] *** 2025-03-23 13:38:52.342262 | orchestrator | Sunday 23 March 2025 13:36:38 +0000 (0:00:00.352) 0:00:06.898 ********** 2025-03-23 13:38:52.342278 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.342294 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.342311 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.342326 | orchestrator | 2025-03-23 13:38:52.342342 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-03-23 13:38:52.342359 | orchestrator | Sunday 23 March 2025 13:36:38 +0000 (0:00:00.602) 0:00:07.500 ********** 2025-03-23 13:38:52.342375 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:38:52.342391 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:38:52.342407 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:38:52.342422 | orchestrator | 2025-03-23 13:38:52.342438 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-03-23 13:38:52.342454 | orchestrator | Sunday 23 March 2025 13:36:39 +0000 (0:00:00.485) 0:00:07.986 ********** 2025-03-23 13:38:52.342470 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-03-23 13:38:52.342486 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-23 13:38:52.342502 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-23 13:38:52.342528 | orchestrator | 2025-03-23 13:38:52.342542 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-03-23 13:38:52.342556 | orchestrator | Sunday 23 March 2025 13:36:40 +0000 (0:00:00.873) 0:00:08.860 ********** 2025-03-23 13:38:52.342570 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:38:52.342584 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:38:52.342598 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:38:52.342612 | orchestrator | 2025-03-23 13:38:52.342626 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-03-23 13:38:52.342663 | orchestrator | Sunday 23 March 2025 13:36:40 +0000 (0:00:00.580) 0:00:09.440 ********** 2025-03-23 13:38:52.342693 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-03-23 13:38:52.342708 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-23 13:38:52.342722 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-23 13:38:52.342736 | orchestrator | 2025-03-23 13:38:52.342750 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-03-23 13:38:52.342763 | orchestrator | Sunday 23 March 2025 13:36:43 +0000 (0:00:02.595) 0:00:12.036 ********** 2025-03-23 13:38:52.342777 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-03-23 13:38:52.342791 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-03-23 13:38:52.342805 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-03-23 13:38:52.342819 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.342833 | orchestrator | 2025-03-23 13:38:52.342847 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-03-23 13:38:52.342861 | orchestrator | Sunday 23 March 
2025 13:36:43 +0000 (0:00:00.469) 0:00:12.506 ********** 2025-03-23 13:38:52.342876 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-03-23 13:38:52.342892 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-03-23 13:38:52.342907 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-03-23 13:38:52.342921 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.342935 | orchestrator | 2025-03-23 13:38:52.342949 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-03-23 13:38:52.342963 | orchestrator | Sunday 23 March 2025 13:36:44 +0000 (0:00:00.721) 0:00:13.228 ********** 2025-03-23 13:38:52.342978 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-03-23 13:38:52.342994 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-03-23 13:38:52.343008 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-03-23 13:38:52.343030 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.343044 | orchestrator | 2025-03-23 13:38:52.343058 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-03-23 13:38:52.343072 | orchestrator | Sunday 23 March 2025 13:36:44 +0000 (0:00:00.176) 0:00:13.404 ********** 2025-03-23 13:38:52.343088 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '7c817de588a9', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-03-23 13:36:41.977858', 'end': '2025-03-23 13:36:42.020135', 'delta': '0:00:00.042277', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 
'stdin': None}}, 'stdout_lines': ['7c817de588a9'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-03-23 13:38:52.343119 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'fc904a968a1f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-03-23 13:36:42.619945', 'end': '2025-03-23 13:36:42.661251', 'delta': '0:00:00.041306', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc904a968a1f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-03-23 13:38:52.343135 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '6f2d0ad80043', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-03-23 13:36:43.244511', 'end': '2025-03-23 13:36:43.286636', 'delta': '0:00:00.042125', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6f2d0ad80043'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-03-23 13:38:52.343150 | orchestrator | 2025-03-23 13:38:52.343164 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-03-23 13:38:52.343178 | orchestrator | Sunday 23 March 2025 13:36:45 +0000 (0:00:00.225) 0:00:13.630 ********** 2025-03-23 13:38:52.343192 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:38:52.343206 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:38:52.343220 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:38:52.343234 | orchestrator | 2025-03-23 13:38:52.343248 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-03-23 13:38:52.343262 | orchestrator | Sunday 23 March 2025 13:36:45 +0000 (0:00:00.462) 0:00:14.093 ********** 2025-03-23 13:38:52.343276 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-03-23 13:38:52.343290 | orchestrator | 2025-03-23 13:38:52.343304 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-03-23 13:38:52.343317 | orchestrator | Sunday 23 March 2025 13:36:46 +0000 (0:00:01.486) 0:00:15.579 ********** 2025-03-23 13:38:52.343331 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.343345 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.343359 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.343380 | orchestrator | 2025-03-23 13:38:52.343394 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-03-23 13:38:52.343407 | orchestrator | Sunday 23 March 2025 13:36:47 +0000 (0:00:00.563) 0:00:16.143 ********** 2025-03-23 13:38:52.343421 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.343435 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.343449 | orchestrator | skipping: [testbed-node-5] 
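The "find a running mon container" task above located each monitor with a docker ps name filter, and the role then reuses that container to query the existing cluster, for example its fsid. A small Python sketch under those assumptions; "ceph fsid" is used here as a stand-in for however ceph-facts actually reads the fsid from the running cluster:

    import subprocess

    def find_mon_container(hostname):
        # Mirror "docker ps -q --filter name=ceph-mon-<hostname>" from the task above.
        out = subprocess.run(
            ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
            capture_output=True, text=True, check=True).stdout.strip()
        return out.splitlines()[0] if out else None

    def current_fsid(container_id):
        # Ask the running monitor for the cluster fsid (assumed command).
        return subprocess.run(
            ["docker", "exec", container_id, "ceph", "fsid"],
            capture_output=True, text=True, check=True).stdout.strip()

    # Example: cid = find_mon_container("testbed-node-0")
    #          fsid = current_fsid(cid) if cid else None
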
2025-03-23 13:38:52.343463 | orchestrator | 2025-03-23 13:38:52.343477 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-03-23 13:38:52.343491 | orchestrator | Sunday 23 March 2025 13:36:48 +0000 (0:00:00.462) 0:00:16.605 ********** 2025-03-23 13:38:52.343504 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.343518 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.343532 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.343546 | orchestrator | 2025-03-23 13:38:52.343560 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-03-23 13:38:52.343574 | orchestrator | Sunday 23 March 2025 13:36:48 +0000 (0:00:00.335) 0:00:16.941 ********** 2025-03-23 13:38:52.343588 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:38:52.343602 | orchestrator | 2025-03-23 13:38:52.343615 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-03-23 13:38:52.343630 | orchestrator | Sunday 23 March 2025 13:36:48 +0000 (0:00:00.178) 0:00:17.120 ********** 2025-03-23 13:38:52.343698 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.343713 | orchestrator | 2025-03-23 13:38:52.343727 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-03-23 13:38:52.343741 | orchestrator | Sunday 23 March 2025 13:36:48 +0000 (0:00:00.268) 0:00:17.389 ********** 2025-03-23 13:38:52.343755 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.343769 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.343782 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.343796 | orchestrator | 2025-03-23 13:38:52.343810 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-03-23 13:38:52.343824 | orchestrator | Sunday 23 March 2025 13:36:49 +0000 (0:00:00.614) 0:00:18.003 ********** 2025-03-23 13:38:52.343838 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.343851 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.343865 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.343879 | orchestrator | 2025-03-23 13:38:52.343893 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-03-23 13:38:52.343907 | orchestrator | Sunday 23 March 2025 13:36:49 +0000 (0:00:00.333) 0:00:18.337 ********** 2025-03-23 13:38:52.343920 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.343934 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.343947 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.343961 | orchestrator | 2025-03-23 13:38:52.343975 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-03-23 13:38:52.343994 | orchestrator | Sunday 23 March 2025 13:36:50 +0000 (0:00:00.367) 0:00:18.704 ********** 2025-03-23 13:38:52.344009 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.344023 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.344043 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.344058 | orchestrator | 2025-03-23 13:38:52.344071 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-03-23 13:38:52.344085 | orchestrator | Sunday 23 March 2025 13:36:50 +0000 (0:00:00.393) 0:00:19.098 ********** 2025-03-23 13:38:52.344099 | orchestrator | skipping: 
[testbed-node-3] 2025-03-23 13:38:52.344113 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.344126 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.344140 | orchestrator | 2025-03-23 13:38:52.344154 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-03-23 13:38:52.344168 | orchestrator | Sunday 23 March 2025 13:36:51 +0000 (0:00:00.574) 0:00:19.673 ********** 2025-03-23 13:38:52.344182 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.344203 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.344217 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.344231 | orchestrator | 2025-03-23 13:38:52.344245 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-03-23 13:38:52.344258 | orchestrator | Sunday 23 March 2025 13:36:51 +0000 (0:00:00.381) 0:00:20.054 ********** 2025-03-23 13:38:52.344272 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.344286 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.344299 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.344313 | orchestrator | 2025-03-23 13:38:52.344327 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-03-23 13:38:52.344341 | orchestrator | Sunday 23 March 2025 13:36:51 +0000 (0:00:00.328) 0:00:20.382 ********** 2025-03-23 13:38:52.344355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8229b7a0--df8d--5815--8245--22e3d24081aa-osd--block--8229b7a0--df8d--5815--8245--22e3d24081aa', 'dm-uuid-LVM-Z48ckVyGrsEeeM12MXfzlAr80MqHOespAGApPAmB7UHP51wAby8gktMaL6KtU0Hl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0ab6ed36--da2c--5faf--8aed--224e80357d25-osd--block--0ab6ed36--da2c--5faf--8aed--224e80357d25', 'dm-uuid-LVM-zYaqQ23yxO7oCX7AViyErFqgtgm1yBm7L9P0v3gzBf1hcDGQ6eBC5dhWbbHjuIKZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344430 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5102d35b--39ce--5a2f--80bc--7bd1ce5c8233-osd--block--5102d35b--39ce--5a2f--80bc--7bd1ce5c8233', 'dm-uuid-LVM-zFH3MJEtAsPE2iavoT5XYn5YfZ3YqLSGcmGfiJi1IHocp1MjXSfepodyJQ5KjueO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344551 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48', 'scsi-SQEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48-part1', 'scsi-SQEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48-part14', 'scsi-SQEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48-part15', 'scsi-SQEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48-part16', 'scsi-SQEMU_QEMU_HARDDISK_262a0bcf-e399-49b7-b2ad-e29e608abf48-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:38:52.344575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cbe43cef--cccc--569d--93a4--8e7e2e8a94cb-osd--block--cbe43cef--cccc--569d--93a4--8e7e2e8a94cb', 'dm-uuid-LVM-V3IcR6k6ADm7uboWm9b3H9L00Lf56OvneHKsqEh9vhDPYb4hIlfo8AalFIUo9etO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8229b7a0--df8d--5815--8245--22e3d24081aa-osd--block--8229b7a0--df8d--5815--8245--22e3d24081aa'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-X1HqtA-mY6g-cTGC-a6FL-CEW3-JT27-9th0tU', 'scsi-0QEMU_QEMU_HARDDISK_006f5652-5a72-4e84-ab95-2470543ee754', 'scsi-SQEMU_QEMU_HARDDISK_006f5652-5a72-4e84-ab95-2470543ee754'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:38:52.344606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0ab6ed36--da2c--5faf--8aed--224e80357d25-osd--block--0ab6ed36--da2c--5faf--8aed--224e80357d25'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FZYVEW-6g1r-vH9N-I6jR-a7bf-2KeT-uyr6IJ', 'scsi-0QEMU_QEMU_HARDDISK_3a31d3ab-ce8e-4019-949d-50084d47806b', 'scsi-SQEMU_QEMU_HARDDISK_3a31d3ab-ce8e-4019-949d-50084d47806b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:38:52.344681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4783091f-b49d-4bb1-a12d-8bcd2b1f8992', 'scsi-SQEMU_QEMU_HARDDISK_4783091f-b49d-4bb1-a12d-8bcd2b1f8992'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:38:52.344732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-03-23-12-37-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:38:52.344748 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344763 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.344777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344791 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9205bfbb--9f4f--501b--85a3--60f418fff160-osd--block--9205bfbb--9f4f--501b--85a3--60f418fff160', 'dm-uuid-LVM-hCtV4JXP0S36MwrQytf7UEoTu7ekt75BjonYE2XGZ5VNBQXOgGCeY733w01OQ2la'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5a8506d3--5e74--5dde--8df3--17f522800900-osd--block--5a8506d3--5e74--5dde--8df3--17f522800900', 'dm-uuid-LVM-kSzZTeeRafQzJOtfEIdDvkZxGp8wDkCEcUcFC1jtiwTvO1VkDPpkRpE0XZtCm87M'], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344845 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344860 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74', 'scsi-SQEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74-part1', 'scsi-SQEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74-part14', 'scsi-SQEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74-part15', 'scsi-SQEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74-part16', 'scsi-SQEMU_QEMU_HARDDISK_4f747510-de15-45d7-9b81-ad0561662c74-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:38:52.344974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.344996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5102d35b--39ce--5a2f--80bc--7bd1ce5c8233-osd--block--5102d35b--39ce--5a2f--80bc--7bd1ce5c8233'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YA4ubi-Sdu3-XPZh-H0Ab-au4v-HCqU-auA1rk', 'scsi-0QEMU_QEMU_HARDDISK_87cd8e5e-a11a-492b-892b-8d669e416dd6', 'scsi-SQEMU_QEMU_HARDDISK_87cd8e5e-a11a-492b-892b-8d669e416dd6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:38:52.345012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--cbe43cef--cccc--569d--93a4--8e7e2e8a94cb-osd--block--cbe43cef--cccc--569d--93a4--8e7e2e8a94cb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BOFdCw-FwIk-Rtak-6zoR-95RH-RmmV-m5b3wM', 'scsi-0QEMU_QEMU_HARDDISK_8fbee761-a5d2-4623-bf19-8989346ac6dd', 'scsi-SQEMU_QEMU_HARDDISK_8fbee761-a5d2-4623-bf19-8989346ac6dd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:38:52.345027 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.345046 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea8f35dc-a35e-4089-a77c-db984e90bcf5', 'scsi-SQEMU_QEMU_HARDDISK_ea8f35dc-a35e-4089-a77c-db984e90bcf5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:38:52.345062 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-03-23-12-37-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:38:52.345077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.345097 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.345116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.345137 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:38:52.345152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81', 'scsi-SQEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81-part1', 'scsi-SQEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81-part14', 'scsi-SQEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81-part15', 'scsi-SQEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81-part16', 'scsi-SQEMU_QEMU_HARDDISK_597da941-c896-4f50-9509-26c2623e2e81-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:38:52.345173 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9205bfbb--9f4f--501b--85a3--60f418fff160-osd--block--9205bfbb--9f4f--501b--85a3--60f418fff160'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1sO0lN-s91F-duyc-sh8W-xPDu-3cl6-erbJKd', 'scsi-0QEMU_QEMU_HARDDISK_fc6f372a-e295-454e-89b3-dab7283bde6d', 'scsi-SQEMU_QEMU_HARDDISK_fc6f372a-e295-454e-89b3-dab7283bde6d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:38:52.345188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5a8506d3--5e74--5dde--8df3--17f522800900-osd--block--5a8506d3--5e74--5dde--8df3--17f522800900'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Ip3cp0-9I4g-dskc-D0U7-g73E-3FDn-ZxK0KN', 'scsi-0QEMU_QEMU_HARDDISK_cb506f9f-b661-4909-8304-0e1e52bc58d9', 'scsi-SQEMU_QEMU_HARDDISK_cb506f9f-b661-4909-8304-0e1e52bc58d9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:38:52.345220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e74186d-4472-4929-8a20-9843938772e5', 'scsi-SQEMU_QEMU_HARDDISK_9e74186d-4472-4929-8a20-9843938772e5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:38:52.345234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-03-23-12-37-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:38:52.345247 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.345260 | orchestrator | 2025-03-23 13:38:52.345273 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-03-23 13:38:52.345286 | orchestrator | Sunday 23 March 2025 13:36:52 +0000 (0:00:00.668) 0:00:21.051 ********** 2025-03-23 13:38:52.345298 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-03-23 13:38:52.345311 | orchestrator | 2025-03-23 13:38:52.345323 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-03-23 13:38:52.345336 | orchestrator | Sunday 23 March 2025 13:36:53 +0000 (0:00:01.533) 0:00:22.585 ********** 2025-03-23 13:38:52.345348 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:38:52.345363 | orchestrator | 2025-03-23 13:38:52.345377 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-03-23 13:38:52.345389 | orchestrator | Sunday 23 March 2025 13:36:54 +0000 (0:00:00.199) 0:00:22.784 ********** 2025-03-23 13:38:52.345402 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:38:52.345416 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:38:52.345429 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:38:52.345441 | orchestrator | 2025-03-23 13:38:52.345454 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-03-23 13:38:52.345466 | orchestrator | Sunday 23 March 2025 13:36:54 +0000 (0:00:00.430) 0:00:23.215 ********** 2025-03-23 13:38:52.345479 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:38:52.345491 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:38:52.345504 | orchestrator | ok: 
[testbed-node-5] 2025-03-23 13:38:52.345516 | orchestrator | 2025-03-23 13:38:52.345528 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-03-23 13:38:52.345541 | orchestrator | Sunday 23 March 2025 13:36:55 +0000 (0:00:00.756) 0:00:23.971 ********** 2025-03-23 13:38:52.345553 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:38:52.345566 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:38:52.345578 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:38:52.345590 | orchestrator | 2025-03-23 13:38:52.345603 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-03-23 13:38:52.345622 | orchestrator | Sunday 23 March 2025 13:36:55 +0000 (0:00:00.344) 0:00:24.316 ********** 2025-03-23 13:38:52.345648 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:38:52.345662 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:38:52.345674 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:38:52.345686 | orchestrator | 2025-03-23 13:38:52.345699 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-03-23 13:38:52.345711 | orchestrator | Sunday 23 March 2025 13:36:56 +0000 (0:00:00.996) 0:00:25.313 ********** 2025-03-23 13:38:52.345724 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.345736 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.345749 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.345761 | orchestrator | 2025-03-23 13:38:52.345774 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-03-23 13:38:52.345786 | orchestrator | Sunday 23 March 2025 13:36:57 +0000 (0:00:00.409) 0:00:25.722 ********** 2025-03-23 13:38:52.345799 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.345811 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.345823 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.345835 | orchestrator | 2025-03-23 13:38:52.345848 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-03-23 13:38:52.345860 | orchestrator | Sunday 23 March 2025 13:36:57 +0000 (0:00:00.464) 0:00:26.187 ********** 2025-03-23 13:38:52.345872 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.345884 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.345897 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.345909 | orchestrator | 2025-03-23 13:38:52.345921 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-03-23 13:38:52.345933 | orchestrator | Sunday 23 March 2025 13:36:57 +0000 (0:00:00.400) 0:00:26.588 ********** 2025-03-23 13:38:52.345946 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-03-23 13:38:52.345959 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-03-23 13:38:52.345971 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-03-23 13:38:52.345983 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-03-23 13:38:52.345995 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.346013 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-03-23 13:38:52.346053 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-03-23 13:38:52.346065 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-03-23 13:38:52.346077 | orchestrator 
| skipping: [testbed-node-4] => (item=testbed-node-2)  2025-03-23 13:38:52.346090 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.346102 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-03-23 13:38:52.346115 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.346127 | orchestrator | 2025-03-23 13:38:52.346139 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-03-23 13:38:52.346157 | orchestrator | Sunday 23 March 2025 13:36:58 +0000 (0:00:00.958) 0:00:27.546 ********** 2025-03-23 13:38:52.346170 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-03-23 13:38:52.346182 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-03-23 13:38:52.346195 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-03-23 13:38:52.346207 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-03-23 13:38:52.346219 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-03-23 13:38:52.346231 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.346243 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-03-23 13:38:52.346255 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-03-23 13:38:52.346267 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.346280 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-03-23 13:38:52.346298 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-03-23 13:38:52.346311 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.346323 | orchestrator | 2025-03-23 13:38:52.346335 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-03-23 13:38:52.346352 | orchestrator | Sunday 23 March 2025 13:37:00 +0000 (0:00:01.141) 0:00:28.688 ********** 2025-03-23 13:38:52.346365 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-03-23 13:38:52.346377 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-03-23 13:38:52.346390 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-03-23 13:38:52.346402 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-03-23 13:38:52.346414 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-03-23 13:38:52.346426 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-03-23 13:38:52.346439 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-03-23 13:38:52.346451 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-03-23 13:38:52.346463 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-03-23 13:38:52.346475 | orchestrator | 2025-03-23 13:38:52.346487 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-03-23 13:38:52.346500 | orchestrator | Sunday 23 March 2025 13:37:01 +0000 (0:00:01.666) 0:00:30.354 ********** 2025-03-23 13:38:52.346512 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-03-23 13:38:52.346524 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-03-23 13:38:52.346536 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-03-23 13:38:52.346548 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-03-23 13:38:52.346560 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-03-23 
13:38:52.346573 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-03-23 13:38:52.346585 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.346597 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.346609 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-03-23 13:38:52.346622 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-03-23 13:38:52.346634 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-03-23 13:38:52.346661 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.346674 | orchestrator | 2025-03-23 13:38:52.346686 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-03-23 13:38:52.346699 | orchestrator | Sunday 23 March 2025 13:37:02 +0000 (0:00:00.693) 0:00:31.048 ********** 2025-03-23 13:38:52.346711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-03-23 13:38:52.346723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-03-23 13:38:52.346736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-03-23 13:38:52.346748 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-03-23 13:38:52.346760 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-03-23 13:38:52.346772 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-03-23 13:38:52.346785 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.346797 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.346809 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-03-23 13:38:52.346822 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-03-23 13:38:52.346834 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-03-23 13:38:52.346846 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.346858 | orchestrator | 2025-03-23 13:38:52.346870 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-03-23 13:38:52.346882 | orchestrator | Sunday 23 March 2025 13:37:02 +0000 (0:00:00.502) 0:00:31.550 ********** 2025-03-23 13:38:52.346895 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-03-23 13:38:52.346913 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-03-23 13:38:52.346926 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-03-23 13:38:52.346938 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-03-23 13:38:52.346951 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-03-23 13:38:52.346963 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.346975 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-03-23 13:38:52.346988 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.347000 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-03-23 13:38:52.347017 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-03-23 13:38:52.347030 | orchestrator | skipping: [testbed-node-5] => (item={'name': 
'testbed-node-2', 'addr': '192.168.16.12'})  2025-03-23 13:38:52.347042 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.347055 | orchestrator | 2025-03-23 13:38:52.347067 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-03-23 13:38:52.347079 | orchestrator | Sunday 23 March 2025 13:37:03 +0000 (0:00:00.419) 0:00:31.969 ********** 2025-03-23 13:38:52.347092 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:38:52.347104 | orchestrator | 2025-03-23 13:38:52.347117 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-03-23 13:38:52.347129 | orchestrator | Sunday 23 March 2025 13:37:04 +0000 (0:00:00.816) 0:00:32.786 ********** 2025-03-23 13:38:52.347142 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.347154 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.347166 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.347179 | orchestrator | 2025-03-23 13:38:52.347191 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-03-23 13:38:52.347204 | orchestrator | Sunday 23 March 2025 13:37:04 +0000 (0:00:00.385) 0:00:33.172 ********** 2025-03-23 13:38:52.347216 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.347228 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.347240 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.347253 | orchestrator | 2025-03-23 13:38:52.347265 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-03-23 13:38:52.347278 | orchestrator | Sunday 23 March 2025 13:37:04 +0000 (0:00:00.337) 0:00:33.509 ********** 2025-03-23 13:38:52.347290 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.347302 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.347314 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.347326 | orchestrator | 2025-03-23 13:38:52.347339 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-03-23 13:38:52.347351 | orchestrator | Sunday 23 March 2025 13:37:05 +0000 (0:00:00.346) 0:00:33.856 ********** 2025-03-23 13:38:52.347363 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:38:52.347376 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:38:52.347388 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:38:52.347400 | orchestrator | 2025-03-23 13:38:52.347412 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-03-23 13:38:52.347424 | orchestrator | Sunday 23 March 2025 13:37:05 +0000 (0:00:00.680) 0:00:34.537 ********** 2025-03-23 13:38:52.347436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:38:52.347449 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:38:52.347461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:38:52.347483 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.347495 | orchestrator | 2025-03-23 13:38:52.347508 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-03-23 13:38:52.347520 | orchestrator | Sunday 23 March 2025 13:37:06 +0000 (0:00:00.439) 0:00:34.976 ********** 2025-03-23 
13:38:52.347532 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:38:52.347544 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:38:52.347561 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:38:52.347574 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.347591 | orchestrator | 2025-03-23 13:38:52.347603 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-03-23 13:38:52.347616 | orchestrator | Sunday 23 March 2025 13:37:06 +0000 (0:00:00.438) 0:00:35.415 ********** 2025-03-23 13:38:52.347628 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:38:52.347678 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:38:52.347692 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:38:52.347704 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.347716 | orchestrator | 2025-03-23 13:38:52.347729 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-23 13:38:52.347741 | orchestrator | Sunday 23 March 2025 13:37:07 +0000 (0:00:00.473) 0:00:35.889 ********** 2025-03-23 13:38:52.347754 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:38:52.347765 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:38:52.347775 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:38:52.347785 | orchestrator | 2025-03-23 13:38:52.347795 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-03-23 13:38:52.347805 | orchestrator | Sunday 23 March 2025 13:37:07 +0000 (0:00:00.368) 0:00:36.257 ********** 2025-03-23 13:38:52.347816 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-03-23 13:38:52.347826 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-03-23 13:38:52.347836 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-03-23 13:38:52.347847 | orchestrator | 2025-03-23 13:38:52.347857 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-03-23 13:38:52.347867 | orchestrator | Sunday 23 March 2025 13:37:08 +0000 (0:00:01.059) 0:00:37.316 ********** 2025-03-23 13:38:52.347877 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.347887 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.347897 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.347907 | orchestrator | 2025-03-23 13:38:52.347918 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-23 13:38:52.347928 | orchestrator | Sunday 23 March 2025 13:37:09 +0000 (0:00:00.314) 0:00:37.630 ********** 2025-03-23 13:38:52.347938 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.347948 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.347958 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.347968 | orchestrator | 2025-03-23 13:38:52.347978 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-03-23 13:38:52.347992 | orchestrator | Sunday 23 March 2025 13:37:09 +0000 (0:00:00.378) 0:00:38.009 ********** 2025-03-23 13:38:52.348003 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-23 13:38:52.348018 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.348028 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-23 13:38:52.348039 | 
orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.348049 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-23 13:38:52.348059 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.348069 | orchestrator | 2025-03-23 13:38:52.348080 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-03-23 13:38:52.348090 | orchestrator | Sunday 23 March 2025 13:37:09 +0000 (0:00:00.506) 0:00:38.515 ********** 2025-03-23 13:38:52.348100 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-03-23 13:38:52.348115 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.348126 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-03-23 13:38:52.348136 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.348146 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-03-23 13:38:52.348156 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.348166 | orchestrator | 2025-03-23 13:38:52.348176 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-03-23 13:38:52.348186 | orchestrator | Sunday 23 March 2025 13:37:10 +0000 (0:00:00.648) 0:00:39.163 ********** 2025-03-23 13:38:52.348196 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-23 13:38:52.348206 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-23 13:38:52.348216 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-03-23 13:38:52.348226 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-23 13:38:52.348236 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-03-23 13:38:52.348246 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.348257 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-03-23 13:38:52.348266 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-03-23 13:38:52.348276 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.348286 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-03-23 13:38:52.348296 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-03-23 13:38:52.348306 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.348316 | orchestrator | 2025-03-23 13:38:52.348326 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-03-23 13:38:52.348336 | orchestrator | Sunday 23 March 2025 13:37:11 +0000 (0:00:00.725) 0:00:39.888 ********** 2025-03-23 13:38:52.348346 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.348356 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.348366 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:38:52.348376 | orchestrator | 2025-03-23 13:38:52.348386 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-03-23 13:38:52.348396 | orchestrator | Sunday 23 March 2025 13:37:11 +0000 (0:00:00.351) 0:00:40.240 ********** 2025-03-23 13:38:52.348406 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-03-23 13:38:52.348416 | orchestrator | 
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-23 13:38:52.348426 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-23 13:38:52.348436 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-03-23 13:38:52.348446 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-03-23 13:38:52.348456 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-03-23 13:38:52.348466 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-03-23 13:38:52.348476 | orchestrator | 2025-03-23 13:38:52.348486 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-03-23 13:38:52.348496 | orchestrator | Sunday 23 March 2025 13:37:12 +0000 (0:00:01.277) 0:00:41.518 ********** 2025-03-23 13:38:52.348506 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-03-23 13:38:52.348516 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-23 13:38:52.348526 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-23 13:38:52.348536 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-03-23 13:38:52.348550 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-03-23 13:38:52.348561 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-03-23 13:38:52.348571 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-03-23 13:38:52.348581 | orchestrator | 2025-03-23 13:38:52.348591 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-03-23 13:38:52.348781 | orchestrator | Sunday 23 March 2025 13:37:15 +0000 (0:00:02.114) 0:00:43.632 ********** 2025-03-23 13:38:52.348794 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:38:52.348804 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:38:52.348815 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-03-23 13:38:52.348825 | orchestrator | 2025-03-23 13:38:52.348835 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-03-23 13:38:52.348850 | orchestrator | Sunday 23 March 2025 13:37:15 +0000 (0:00:00.633) 0:00:44.266 ********** 2025-03-23 13:38:52.348862 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-03-23 13:38:52.348873 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-03-23 13:38:52.348884 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 
'replicated_rule', 'size': 3, 'type': 1}) 2025-03-23 13:38:52.348894 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-03-23 13:38:52.348905 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-03-23 13:38:52.348915 | orchestrator | 2025-03-23 13:38:52.348925 | orchestrator | TASK [generate keys] *********************************************************** 2025-03-23 13:38:52.348935 | orchestrator | Sunday 23 March 2025 13:37:58 +0000 (0:00:42.516) 0:01:26.782 ********** 2025-03-23 13:38:52.348946 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-23 13:38:52.348956 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-23 13:38:52.348966 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-23 13:38:52.348979 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-23 13:38:52.348990 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-23 13:38:52.349000 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-23 13:38:52.349010 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-03-23 13:38:52.349020 | orchestrator | 2025-03-23 13:38:52.349030 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-03-23 13:38:52.349040 | orchestrator | Sunday 23 March 2025 13:38:19 +0000 (0:00:21.297) 0:01:48.080 ********** 2025-03-23 13:38:52.349050 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-23 13:38:52.349066 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-23 13:38:52.349076 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-23 13:38:52.349086 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-23 13:38:52.349096 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-23 13:38:52.349106 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-23 13:38:52.349116 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-03-23 13:38:52.349126 | orchestrator | 2025-03-23 13:38:52.349136 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-03-23 13:38:52.349146 | orchestrator | Sunday 23 March 2025 13:38:29 +0000 (0:00:10.161) 0:01:58.242 ********** 2025-03-23 13:38:52.349157 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-23 13:38:52.349167 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-03-23 13:38:52.349177 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-03-23 13:38:52.349187 | orchestrator | 
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-23 13:38:52.349197 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-03-23 13:38:52.349207 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-03-23 13:38:52.349217 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-23 13:38:52.349227 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-03-23 13:38:52.349237 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-03-23 13:38:52.349247 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-23 13:38:52.349257 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-03-23 13:38:52.349271 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-03-23 13:38:55.388132 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-23 13:38:55.388240 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-03-23 13:38:55.388257 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-03-23 13:38:55.388271 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-23 13:38:55.388285 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-03-23 13:38:55.388300 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-03-23 13:38:55.388315 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-03-23 13:38:55.388330 | orchestrator |
2025-03-23 13:38:55.388345 | orchestrator | PLAY RECAP *********************************************************************
2025-03-23 13:38:55.388360 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-03-23 13:38:55.388376 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0
2025-03-23 13:38:55.388524 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0
2025-03-23 13:38:55.388544 | orchestrator |
2025-03-23 13:38:55.388558 | orchestrator |
2025-03-23 13:38:55.388572 | orchestrator |
2025-03-23 13:38:55.388591 | orchestrator | TASKS RECAP ********************************************************************
2025-03-23 13:38:55.388605 | orchestrator | Sunday 23 March 2025 13:38:49 +0000 (0:00:19.745) 0:02:17.988 **********
2025-03-23 13:38:55.388700 | orchestrator | ===============================================================================
2025-03-23 13:38:55.388716 | orchestrator | create openstack pool(s) ----------------------------------------------- 42.52s
2025-03-23 13:38:55.388730 | orchestrator | generate keys ---------------------------------------------------------- 21.30s
2025-03-23 13:38:55.388744 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 19.75s
2025-03-23 13:38:55.388758 | orchestrator | get keys from monitors ------------------------------------------------- 10.16s
2025-03-23 13:38:55.388773 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.60s
2025-03-23 13:38:55.388787 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 2.11s
2025-03-23 13:38:55.388800 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.67s
2025-03-23 13:38:55.388814 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.53s
2025-03-23 13:38:55.388898 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.49s
2025-03-23 13:38:55.388916 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.28s
2025-03-23 13:38:55.388930 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 1.14s
2025-03-23 13:38:55.388944 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 1.12s
2025-03-23 13:38:55.388958 | orchestrator | ceph-facts : set_fact rgw_instances without rgw multisite --------------- 1.06s
2025-03-23 13:38:55.388972 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 1.00s
2025-03-23 13:38:55.388986 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.96s
2025-03-23 13:38:55.389000 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.94s
2025-03-23 13:38:55.389014 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.87s
2025-03-23 13:38:55.389028 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 0.82s
2025-03-23 13:38:55.389042 | orchestrator | ceph-facts : include facts.yml ------------------------------------------ 0.81s
2025-03-23 13:38:55.389056 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.76s
2025-03-23 13:38:55.389070 | orchestrator | 2025-03-23 13:38:52 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED
2025-03-23 13:38:55.389084 | orchestrator | 2025-03-23 13:38:52 | INFO  | Wait 1 second(s) until the next check
2025-03-23 13:38:55.389116 | orchestrator | 2025-03-23 13:38:55 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
2025-03-23 13:38:55.389917 | orchestrator | 2025-03-23 13:38:55 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED
2025-03-23 13:38:55.389945 | orchestrator | 2025-03-23 13:38:55 | INFO  | Task 96d358bb-6e84-452e-9206-bfa83f78a355 is in state STARTED
2025-03-23 13:38:55.389966 | orchestrator | 2025-03-23 13:38:55 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED
2025-03-23 13:38:58.426727 | orchestrator | 2025-03-23 13:38:55 | INFO  | Wait 1 second(s) until the next check
2025-03-23 13:38:58.426844 | orchestrator | 2025-03-23 13:38:58 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
2025-03-23 13:38:58.427929 | orchestrator | 2025-03-23 13:38:58 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED
2025-03-23 13:38:58.430352 | orchestrator | 2025-03-23 13:38:58 | INFO  | Task 96d358bb-6e84-452e-9206-bfa83f78a355 is in state STARTED
2025-03-23 13:38:58.432176 | orchestrator | 2025-03-23 13:38:58 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED
2025-03-23 13:38:58.432266 | orchestrator | 2025-03-23 13:38:58 | INFO  | Wait 1 second(s) until the next check
2025-03-23 13:39:01.502210 | orchestrator | 2025-03-23 13:39:01 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
2025-03-23 13:39:01.502670 | orchestrator |
2025-03-23 13:39:01 | INFO  | Task abaf4596-b4d4-434e-8d60-129346b1392e is in state STARTED 2025-03-23 13:39:01.505080 | orchestrator | 2025-03-23 13:39:01 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:39:01.512093 | orchestrator | 2025-03-23 13:39:01 | INFO  | Task 96d358bb-6e84-452e-9206-bfa83f78a355 is in state STARTED 2025-03-23 13:39:01.514166 | orchestrator | 2025-03-23 13:39:01 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:39:01.514223 | orchestrator | 2025-03-23 13:39:01 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:39:04.555281 | orchestrator | 2025-03-23 13:39:04 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:39:04.555800 | orchestrator | 2025-03-23 13:39:04 | INFO  | Task abaf4596-b4d4-434e-8d60-129346b1392e is in state STARTED 2025-03-23 13:39:04.557610 | orchestrator | 2025-03-23 13:39:04 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state STARTED 2025-03-23 13:39:04.559886 | orchestrator | 2025-03-23 13:39:04 | INFO  | Task 96d358bb-6e84-452e-9206-bfa83f78a355 is in state STARTED 2025-03-23 13:39:04.561404 | orchestrator | 2025-03-23 13:39:04 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:39:04.561800 | orchestrator | 2025-03-23 13:39:04 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:39:07.611895 | orchestrator | 2025-03-23 13:39:07 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:39:07.612848 | orchestrator | 2025-03-23 13:39:07 | INFO  | Task abaf4596-b4d4-434e-8d60-129346b1392e is in state STARTED 2025-03-23 13:39:07.615261 | orchestrator | 2025-03-23 13:39:07 | INFO  | Task 98ec53a5-5820-4e5f-84c7-47e176349e66 is in state SUCCESS 2025-03-23 13:39:07.617223 | orchestrator | 2025-03-23 13:39:07.617262 | orchestrator | 2025-03-23 13:39:07.617278 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 13:39:07.617293 | orchestrator | 2025-03-23 13:39:07.617308 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 13:39:07.617322 | orchestrator | Sunday 23 March 2025 13:37:19 +0000 (0:00:00.414) 0:00:00.414 ********** 2025-03-23 13:39:07.617336 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:07.617353 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:39:07.617367 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:39:07.617381 | orchestrator | 2025-03-23 13:39:07.617395 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-23 13:39:07.617410 | orchestrator | Sunday 23 March 2025 13:37:19 +0000 (0:00:00.453) 0:00:00.868 ********** 2025-03-23 13:39:07.617425 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-03-23 13:39:07.617439 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-03-23 13:39:07.617453 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-03-23 13:39:07.617467 | orchestrator | 2025-03-23 13:39:07.617482 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-03-23 13:39:07.617496 | orchestrator | 2025-03-23 13:39:07.617510 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-03-23 13:39:07.617525 | orchestrator | Sunday 23 March 2025 13:37:20 +0000 (0:00:00.348) 0:00:01.216 ********** 2025-03-23 
13:39:07.617540 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:39:07.617914 | orchestrator | 2025-03-23 13:39:07.617934 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-03-23 13:39:07.617963 | orchestrator | Sunday 23 March 2025 13:37:21 +0000 (0:00:00.825) 0:00:02.042 ********** 2025-03-23 13:39:07.617982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-23 13:39:07.618085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-23 13:39:07.618116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-23 13:39:07.618133 | orchestrator | 
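The Horizon service definition looped over in the task above carries its own container healthcheck: 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'] with interval, retries and timeout of 30/3/30. The short Python sketch below only approximates what such a check does (probe the dashboard endpoint and retry a few times); it is not kolla's healthcheck_curl helper, and the function name and defaults are illustrative only.

import time
import urllib.request

def probe_horizon(url="http://192.168.16.10:80", retries=3, interval=30, timeout=30):
    # Probe the Horizon endpoint; return True on the first successful GET,
    # otherwise retry up to `retries` times, sleeping `interval` seconds between attempts.
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 400
        except Exception as exc:  # connection refused, timeout, HTTP error, ...
            print(f"attempt {attempt}/{retries} failed: {exc}")
        if attempt < retries:
            time.sleep(interval)
    return False

if __name__ == "__main__":
    print("healthy" if probe_horizon() else "unhealthy")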
2025-03-23 13:39:07.618147 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-03-23 13:39:07.618161 | orchestrator | Sunday 23 March 2025 13:37:22 +0000 (0:00:01.425) 0:00:03.467 ********** 2025-03-23 13:39:07.618175 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:07.618190 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:39:07.618204 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:39:07.618218 | orchestrator | 2025-03-23 13:39:07.618233 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-03-23 13:39:07.618247 | orchestrator | Sunday 23 March 2025 13:37:22 +0000 (0:00:00.298) 0:00:03.766 ********** 2025-03-23 13:39:07.618269 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-03-23 13:39:07.618284 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-03-23 13:39:07.618298 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-03-23 13:39:07.618312 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-03-23 13:39:07.618326 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-03-23 13:39:07.618340 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-03-23 13:39:07.618354 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-03-23 13:39:07.618367 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-03-23 13:39:07.618381 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-03-23 13:39:07.618402 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-03-23 13:39:07.618416 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-03-23 13:39:07.618430 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-03-23 13:39:07.618443 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-03-23 13:39:07.618457 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-03-23 13:39:07.618471 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-03-23 13:39:07.618484 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-03-23 13:39:07.618498 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-03-23 13:39:07.618512 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-03-23 13:39:07.618526 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-03-23 13:39:07.618539 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-03-23 13:39:07.618553 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-03-23 13:39:07.618568 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-03-23 13:39:07.618583 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-03-23 13:39:07.618597 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-03-23 13:39:07.618611 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-03-23 13:39:07.618625 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-03-23 13:39:07.618664 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-03-23 13:39:07.618680 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-03-23 13:39:07.618694 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-03-23 13:39:07.618708 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-03-23 13:39:07.618721 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-03-23 13:39:07.618735 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-03-23 13:39:07.618749 | orchestrator | 2025-03-23 13:39:07.618763 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-23 13:39:07.618777 | orchestrator | Sunday 23 March 2025 13:37:23 +0000 (0:00:01.047) 0:00:04.813 ********** 2025-03-23 13:39:07.618791 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:07.618805 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:39:07.618818 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:39:07.618833 | orchestrator | 2025-03-23 13:39:07.618846 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-23 13:39:07.618868 | orchestrator | Sunday 23 March 2025 13:37:24 +0000 (0:00:00.581) 0:00:05.395 ********** 2025-03-23 13:39:07.618881 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.618896 | orchestrator | 2025-03-23 13:39:07.618915 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-23 13:39:07.618930 | orchestrator | Sunday 23 March 2025 13:37:24 +0000 (0:00:00.189) 0:00:05.584 ********** 2025-03-23 13:39:07.618944 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.618958 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:39:07.618972 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:39:07.618986 | orchestrator | 2025-03-23 13:39:07.619006 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-23 13:39:07.619021 | orchestrator | Sunday 23 March 2025 13:37:25 +0000 (0:00:00.498) 0:00:06.082 ********** 2025-03-23 13:39:07.619035 | orchestrator | ok: [testbed-node-0] 
2025-03-23 13:39:07.619048 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:39:07.619062 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:39:07.619076 | orchestrator | 2025-03-23 13:39:07.619090 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-23 13:39:07.619103 | orchestrator | Sunday 23 March 2025 13:37:25 +0000 (0:00:00.344) 0:00:06.427 ********** 2025-03-23 13:39:07.619117 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.619277 | orchestrator | 2025-03-23 13:39:07.619297 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-23 13:39:07.619312 | orchestrator | Sunday 23 March 2025 13:37:25 +0000 (0:00:00.288) 0:00:06.716 ********** 2025-03-23 13:39:07.619326 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.619340 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:39:07.619354 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:39:07.619367 | orchestrator | 2025-03-23 13:39:07.619381 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-23 13:39:07.619395 | orchestrator | Sunday 23 March 2025 13:37:26 +0000 (0:00:00.352) 0:00:07.068 ********** 2025-03-23 13:39:07.619409 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:07.619423 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:39:07.619436 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:39:07.619450 | orchestrator | 2025-03-23 13:39:07.619464 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-23 13:39:07.619478 | orchestrator | Sunday 23 March 2025 13:37:26 +0000 (0:00:00.507) 0:00:07.576 ********** 2025-03-23 13:39:07.619491 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.619519 | orchestrator | 2025-03-23 13:39:07.619533 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-23 13:39:07.619547 | orchestrator | Sunday 23 March 2025 13:37:26 +0000 (0:00:00.118) 0:00:07.694 ********** 2025-03-23 13:39:07.619561 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.619575 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:39:07.619588 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:39:07.619602 | orchestrator | 2025-03-23 13:39:07.619616 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-23 13:39:07.619630 | orchestrator | Sunday 23 March 2025 13:37:27 +0000 (0:00:00.508) 0:00:08.203 ********** 2025-03-23 13:39:07.619667 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:07.619681 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:39:07.619695 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:39:07.619709 | orchestrator | 2025-03-23 13:39:07.619723 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-23 13:39:07.619737 | orchestrator | Sunday 23 March 2025 13:37:27 +0000 (0:00:00.468) 0:00:08.672 ********** 2025-03-23 13:39:07.619751 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.619765 | orchestrator | 2025-03-23 13:39:07.619779 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-23 13:39:07.619792 | orchestrator | Sunday 23 March 2025 13:37:27 +0000 (0:00:00.147) 0:00:08.819 ********** 2025-03-23 13:39:07.619815 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.619829 | 
orchestrator | skipping: [testbed-node-1] 2025-03-23 13:39:07.619843 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:39:07.619856 | orchestrator | 2025-03-23 13:39:07.619870 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-23 13:39:07.619884 | orchestrator | Sunday 23 March 2025 13:37:28 +0000 (0:00:00.439) 0:00:09.258 ********** 2025-03-23 13:39:07.619898 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:07.619912 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:39:07.619928 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:39:07.619943 | orchestrator | 2025-03-23 13:39:07.619958 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-23 13:39:07.619974 | orchestrator | Sunday 23 March 2025 13:37:28 +0000 (0:00:00.313) 0:00:09.572 ********** 2025-03-23 13:39:07.619990 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.620005 | orchestrator | 2025-03-23 13:39:07.620020 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-23 13:39:07.620036 | orchestrator | Sunday 23 March 2025 13:37:28 +0000 (0:00:00.260) 0:00:09.833 ********** 2025-03-23 13:39:07.620051 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.620067 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:39:07.620082 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:39:07.620098 | orchestrator | 2025-03-23 13:39:07.620113 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-23 13:39:07.620128 | orchestrator | Sunday 23 March 2025 13:37:29 +0000 (0:00:00.358) 0:00:10.192 ********** 2025-03-23 13:39:07.620144 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:07.620159 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:39:07.620175 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:39:07.620191 | orchestrator | 2025-03-23 13:39:07.620206 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-23 13:39:07.620222 | orchestrator | Sunday 23 March 2025 13:37:29 +0000 (0:00:00.543) 0:00:10.735 ********** 2025-03-23 13:39:07.620237 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.620251 | orchestrator | 2025-03-23 13:39:07.620265 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-23 13:39:07.620279 | orchestrator | Sunday 23 March 2025 13:37:29 +0000 (0:00:00.126) 0:00:10.862 ********** 2025-03-23 13:39:07.620293 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.620307 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:39:07.620321 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:39:07.620335 | orchestrator | 2025-03-23 13:39:07.620349 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-23 13:39:07.620367 | orchestrator | Sunday 23 March 2025 13:37:30 +0000 (0:00:00.505) 0:00:11.368 ********** 2025-03-23 13:39:07.620389 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:07.620404 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:39:07.620418 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:39:07.620432 | orchestrator | 2025-03-23 13:39:07.620446 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-23 13:39:07.620460 | orchestrator | Sunday 23 March 2025 13:37:31 +0000 (0:00:00.572) 0:00:11.940 ********** 
2025-03-23 13:39:07.620474 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.620487 | orchestrator | 2025-03-23 13:39:07.620501 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-23 13:39:07.620515 | orchestrator | Sunday 23 March 2025 13:37:31 +0000 (0:00:00.151) 0:00:12.091 ********** 2025-03-23 13:39:07.620528 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.620542 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:39:07.620556 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:39:07.620570 | orchestrator | 2025-03-23 13:39:07.620583 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-23 13:39:07.620597 | orchestrator | Sunday 23 March 2025 13:37:31 +0000 (0:00:00.477) 0:00:12.569 ********** 2025-03-23 13:39:07.620611 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:07.620632 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:39:07.620692 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:39:07.620708 | orchestrator | 2025-03-23 13:39:07.620722 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-23 13:39:07.620736 | orchestrator | Sunday 23 March 2025 13:37:32 +0000 (0:00:00.515) 0:00:13.085 ********** 2025-03-23 13:39:07.620750 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.620764 | orchestrator | 2025-03-23 13:39:07.620778 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-23 13:39:07.620792 | orchestrator | Sunday 23 March 2025 13:37:32 +0000 (0:00:00.147) 0:00:13.233 ********** 2025-03-23 13:39:07.620805 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.620819 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:39:07.620833 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:39:07.620847 | orchestrator | 2025-03-23 13:39:07.620861 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-23 13:39:07.620875 | orchestrator | Sunday 23 March 2025 13:37:32 +0000 (0:00:00.435) 0:00:13.668 ********** 2025-03-23 13:39:07.620889 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:07.620903 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:39:07.620917 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:39:07.620931 | orchestrator | 2025-03-23 13:39:07.620945 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-23 13:39:07.620959 | orchestrator | Sunday 23 March 2025 13:37:33 +0000 (0:00:00.370) 0:00:14.039 ********** 2025-03-23 13:39:07.620973 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.620987 | orchestrator | 2025-03-23 13:39:07.621001 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-23 13:39:07.621015 | orchestrator | Sunday 23 March 2025 13:37:33 +0000 (0:00:00.151) 0:00:14.191 ********** 2025-03-23 13:39:07.621028 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.621043 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:39:07.621056 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:39:07.621070 | orchestrator | 2025-03-23 13:39:07.621084 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-23 13:39:07.621098 | orchestrator | Sunday 23 March 2025 13:37:33 +0000 (0:00:00.471) 0:00:14.662 ********** 2025-03-23 
13:39:07.621112 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:07.621126 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:39:07.621140 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:39:07.621154 | orchestrator | 2025-03-23 13:39:07.621168 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-23 13:39:07.621182 | orchestrator | Sunday 23 March 2025 13:37:34 +0000 (0:00:00.525) 0:00:15.188 ********** 2025-03-23 13:39:07.621196 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.621210 | orchestrator | 2025-03-23 13:39:07.621224 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-23 13:39:07.621238 | orchestrator | Sunday 23 March 2025 13:37:34 +0000 (0:00:00.127) 0:00:15.315 ********** 2025-03-23 13:39:07.621252 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.621266 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:39:07.621285 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:39:07.621300 | orchestrator | 2025-03-23 13:39:07.621313 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-23 13:39:07.621327 | orchestrator | Sunday 23 March 2025 13:37:34 +0000 (0:00:00.477) 0:00:15.793 ********** 2025-03-23 13:39:07.621341 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:07.621355 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:39:07.621369 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:39:07.621383 | orchestrator | 2025-03-23 13:39:07.621397 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-23 13:39:07.621411 | orchestrator | Sunday 23 March 2025 13:37:35 +0000 (0:00:00.523) 0:00:16.317 ********** 2025-03-23 13:39:07.621424 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.621438 | orchestrator | 2025-03-23 13:39:07.621460 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-23 13:39:07.621474 | orchestrator | Sunday 23 March 2025 13:37:35 +0000 (0:00:00.156) 0:00:16.473 ********** 2025-03-23 13:39:07.621488 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.621502 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:39:07.621516 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:39:07.621530 | orchestrator | 2025-03-23 13:39:07.621544 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-03-23 13:39:07.621558 | orchestrator | Sunday 23 March 2025 13:37:36 +0000 (0:00:00.470) 0:00:16.943 ********** 2025-03-23 13:39:07.621572 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:39:07.621585 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:39:07.621599 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:39:07.621613 | orchestrator | 2025-03-23 13:39:07.621627 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-03-23 13:39:07.621663 | orchestrator | Sunday 23 March 2025 13:37:38 +0000 (0:00:02.873) 0:00:19.817 ********** 2025-03-23 13:39:07.621679 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-03-23 13:39:07.621699 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-03-23 13:39:07.621713 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 
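The configuration tasks above ("Copying over config.json files for services" and "Copying over horizon.conf") both follow the usual kolla-ansible pattern: render a Jinja2 template from the role's templates directory once per controller, substituting that host's addresses, and place the result under /etc/kolla/horizon/. The standalone Python sketch below is only a rough illustration of that pattern under assumed inputs; in reality Ansible's template module does this work, and the per-host variables used here are invented.

from jinja2 import Environment, FileSystemLoader  # requires the jinja2 package

# Hypothetical per-host variables; kolla-ansible derives these from its inventory.
controllers = {
    "testbed-node-0": {"api_address": "192.168.16.10"},
    "testbed-node-1": {"api_address": "192.168.16.11"},
    "testbed-node-2": {"api_address": "192.168.16.12"},
}

env = Environment(loader=FileSystemLoader("/ansible/roles/horizon/templates"))
template = env.get_template("horizon.conf.j2")

for host, host_vars in controllers.items():
    rendered = template.render(inventory_hostname=host, **host_vars)
    # kolla-ansible writes the rendered file to /etc/kolla/horizon/ on each controller;
    # here a copy per host is written locally just for inspection.
    with open(f"/tmp/horizon.conf.{host}", "w", encoding="utf-8") as fh:
        fh.write(rendered)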
2025-03-23 13:39:07.621727 | orchestrator | 2025-03-23 13:39:07.621741 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-03-23 13:39:07.621755 | orchestrator | Sunday 23 March 2025 13:37:42 +0000 (0:00:03.618) 0:00:23.436 ********** 2025-03-23 13:39:07.621769 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-03-23 13:39:07.621784 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-03-23 13:39:07.621797 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-03-23 13:39:07.621811 | orchestrator | 2025-03-23 13:39:07.621825 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-03-23 13:39:07.621839 | orchestrator | Sunday 23 March 2025 13:37:46 +0000 (0:00:03.740) 0:00:27.176 ********** 2025-03-23 13:39:07.621853 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-03-23 13:39:07.621867 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-03-23 13:39:07.621880 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-03-23 13:39:07.621894 | orchestrator | 2025-03-23 13:39:07.621908 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-03-23 13:39:07.621922 | orchestrator | Sunday 23 March 2025 13:37:48 +0000 (0:00:02.385) 0:00:29.562 ********** 2025-03-23 13:39:07.621935 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.621949 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:39:07.621963 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:39:07.621977 | orchestrator | 2025-03-23 13:39:07.621991 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-03-23 13:39:07.622005 | orchestrator | Sunday 23 March 2025 13:37:48 +0000 (0:00:00.325) 0:00:29.887 ********** 2025-03-23 13:39:07.622044 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.622062 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:39:07.622075 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:39:07.622089 | orchestrator | 2025-03-23 13:39:07.622103 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-03-23 13:39:07.622117 | orchestrator | Sunday 23 March 2025 13:37:49 +0000 (0:00:00.583) 0:00:30.471 ********** 2025-03-23 13:39:07.622131 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:39:07.622155 | orchestrator | 2025-03-23 13:39:07.622169 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-03-23 13:39:07.622183 | orchestrator | Sunday 23 March 2025 13:37:50 +0000 (0:00:00.967) 0:00:31.438 ********** 2025-03-23 13:39:07.622206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-23 13:39:07.622225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-23 13:39:07.622255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-23 13:39:07.622271 | orchestrator | 2025-03-23 13:39:07.622286 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-03-23 13:39:07.622300 | orchestrator | Sunday 23 March 2025 13:37:52 +0000 (0:00:01.712) 0:00:33.151 ********** 2025-03-23 13:39:07.622315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-23 13:39:07.622336 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.622360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-23 13:39:07.622376 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:39:07.622390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-23 13:39:07.622418 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:39:07.622433 | orchestrator | 2025-03-23 13:39:07.622446 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-03-23 13:39:07.622461 | orchestrator | Sunday 23 March 2025 13:37:53 +0000 (0:00:00.859) 0:00:34.011 ********** 2025-03-23 13:39:07.622484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-23 13:39:07.622507 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.622522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-23 13:39:07.622537 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:39:07.622562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-23 13:39:07.622584 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:39:07.622599 | orchestrator | 2025-03-23 13:39:07.622613 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-03-23 13:39:07.622627 | orchestrator | Sunday 23 March 2025 13:37:54 +0000 (0:00:01.410) 0:00:35.421 ********** 2025-03-23 13:39:07.622717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-23 13:39:07.622748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-23 13:39:07.622785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-23 13:39:07.622802 | orchestrator | 2025-03-23 13:39:07.622816 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-03-23 13:39:07.622830 | orchestrator | Sunday 23 March 2025 13:38:01 +0000 (0:00:07.152) 0:00:42.574 ********** 2025-03-23 13:39:07.622844 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:07.622858 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:39:07.622872 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:39:07.622886 | orchestrator | 2025-03-23 13:39:07.622900 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-03-23 13:39:07.622914 | orchestrator | Sunday 23 March 2025 13:38:02 +0000 (0:00:00.524) 0:00:43.098 ********** 2025-03-23 13:39:07.622928 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:39:07.622951 | orchestrator | 
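For context on the "Deploy horizon container" task above: every node receives the same service definition dict that the log dumps in full (image, ENABLE_* environment flags, volumes, healthcheck, haproxy sections). What follows is only a minimal sketch, not kolla-ansible's actual code, of how such a definition can be filtered for enabled services and how its healthcheck entry maps to the probe command; the values are copied verbatim from the testbed-node-0 item above, everything else is illustrative.

# Hedged sketch: iterate a kolla-style service dict, skip disabled entries
# (the source of the "skipping:" lines above) and print the derived probe.
services = {
    "horizon": {
        "container_name": "horizon",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/horizon:24.0.1.20241206",
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:80"],
            "timeout": "30",
        },
    },
}

for name, svc in services.items():
    if not svc.get("enabled"):
        continue  # disabled services never reach the deploy step
    probe = " ".join(svc["healthcheck"]["test"][1:])  # drop the CMD-SHELL marker
    print(f"deploy {svc['container_name']} from {svc['image']} | healthcheck: {probe}")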
2025-03-23 13:39:07.622963 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-03-23 13:39:07.622976 | orchestrator | Sunday 23 March 2025 13:38:02 +0000 (0:00:00.742) 0:00:43.840 ********** 2025-03-23 13:39:07.622988 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:39:07.623000 | orchestrator | 2025-03-23 13:39:07.623012 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-03-23 13:39:07.623025 | orchestrator | Sunday 23 March 2025 13:38:05 +0000 (0:00:02.885) 0:00:46.726 ********** 2025-03-23 13:39:07.623037 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:39:07.623049 | orchestrator | 2025-03-23 13:39:07.623062 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-03-23 13:39:07.623074 | orchestrator | Sunday 23 March 2025 13:38:08 +0000 (0:00:02.532) 0:00:49.259 ********** 2025-03-23 13:39:07.623087 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:39:07.623099 | orchestrator | 2025-03-23 13:39:07.623111 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-03-23 13:39:07.623123 | orchestrator | Sunday 23 March 2025 13:38:23 +0000 (0:00:15.182) 0:01:04.442 ********** 2025-03-23 13:39:07.623136 | orchestrator | 2025-03-23 13:39:07.623148 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-03-23 13:39:07.623166 | orchestrator | Sunday 23 March 2025 13:38:23 +0000 (0:00:00.072) 0:01:04.514 ********** 2025-03-23 13:39:07.623178 | orchestrator | 2025-03-23 13:39:07.623191 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-03-23 13:39:07.623203 | orchestrator | Sunday 23 March 2025 13:38:23 +0000 (0:00:00.225) 0:01:04.740 ********** 2025-03-23 13:39:07.623215 | orchestrator | 2025-03-23 13:39:07.623227 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-03-23 13:39:07.623240 | orchestrator | Sunday 23 March 2025 13:38:23 +0000 (0:00:00.075) 0:01:04.815 ********** 2025-03-23 13:39:07.623252 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:39:07.623265 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:39:07.623277 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:39:07.623289 | orchestrator | 2025-03-23 13:39:07.623302 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:39:07.623314 | orchestrator | testbed-node-0 : ok=39  changed=11  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-03-23 13:39:07.623327 | orchestrator | testbed-node-1 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-03-23 13:39:07.623339 | orchestrator | testbed-node-2 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-03-23 13:39:07.623351 | orchestrator | 2025-03-23 13:39:07.623364 | orchestrator | 2025-03-23 13:39:07.623376 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:39:07.623388 | orchestrator | Sunday 23 March 2025 13:39:06 +0000 (0:00:42.870) 0:01:47.685 ********** 2025-03-23 13:39:07.623401 | orchestrator | =============================================================================== 2025-03-23 13:39:07.623413 | orchestrator | horizon : Restart horizon container ------------------------------------ 42.87s 2025-03-23 13:39:07.623425 
| orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.18s 2025-03-23 13:39:07.623438 | orchestrator | horizon : Deploy horizon container -------------------------------------- 7.15s 2025-03-23 13:39:07.623450 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 3.74s 2025-03-23 13:39:07.623462 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 3.62s 2025-03-23 13:39:07.623474 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.89s 2025-03-23 13:39:07.623486 | orchestrator | horizon : Copying over config.json files for services ------------------- 2.87s 2025-03-23 13:39:07.623505 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.53s 2025-03-23 13:39:07.623517 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.39s 2025-03-23 13:39:07.623529 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.71s 2025-03-23 13:39:07.623542 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.43s 2025-03-23 13:39:07.623554 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.41s 2025-03-23 13:39:07.623566 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.05s 2025-03-23 13:39:07.623583 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.97s 2025-03-23 13:39:10.667383 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.86s 2025-03-23 13:39:10.667479 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.83s 2025-03-23 13:39:10.667497 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s 2025-03-23 13:39:10.667511 | orchestrator | horizon : Copying over custom themes ------------------------------------ 0.58s 2025-03-23 13:39:10.667526 | orchestrator | horizon : Update policy file name --------------------------------------- 0.58s 2025-03-23 13:39:10.667540 | orchestrator | horizon : Update policy file name --------------------------------------- 0.57s 2025-03-23 13:39:10.667554 | orchestrator | 2025-03-23 13:39:07 | INFO  | Task 96d358bb-6e84-452e-9206-bfa83f78a355 is in state STARTED 2025-03-23 13:39:10.667568 | orchestrator | 2025-03-23 13:39:07 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:39:10.667582 | orchestrator | 2025-03-23 13:39:07 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:39:10.667610 | orchestrator | 2025-03-23 13:39:10 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:39:10.668113 | orchestrator | 2025-03-23 13:39:10 | INFO  | Task abaf4596-b4d4-434e-8d60-129346b1392e is in state STARTED 2025-03-23 13:39:10.672025 | orchestrator | 2025-03-23 13:39:10 | INFO  | Task 96d358bb-6e84-452e-9206-bfa83f78a355 is in state STARTED 2025-03-23 13:39:10.673118 | orchestrator | 2025-03-23 13:39:10 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:39:13.718104 | orchestrator | 2025-03-23 13:39:10 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:39:13.718220 | orchestrator | 2025-03-23 13:39:13 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:39:13.720429 | orchestrator | 
2025-03-23 13:39:13 | INFO  | Task abaf4596-b4d4-434e-8d60-129346b1392e is in state STARTED 2025-03-23 13:39:13.723697 | orchestrator | 2025-03-23 13:39:13 | INFO  | Task 96d358bb-6e84-452e-9206-bfa83f78a355 is in state STARTED 2025-03-23 13:39:13.724875 | orchestrator | 2025-03-23 13:39:13 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:39:16.773635 | orchestrator | 2025-03-23 13:39:13 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:39:16.773805 | orchestrator | 2025-03-23 13:39:16 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:39:16.774682 | orchestrator | 2025-03-23 13:39:16 | INFO  | Task abaf4596-b4d4-434e-8d60-129346b1392e is in state STARTED 2025-03-23 13:39:16.776706 | orchestrator | 2025-03-23 13:39:16 | INFO  | Task 96d358bb-6e84-452e-9206-bfa83f78a355 is in state STARTED 2025-03-23 13:39:16.777817 | orchestrator | 2025-03-23 13:39:16 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:39:19.822367 | orchestrator | 2025-03-23 13:39:16 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:39:19.822509 | orchestrator | 2025-03-23 13:39:19 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:39:19.823042 | orchestrator | 2025-03-23 13:39:19 | INFO  | Task abaf4596-b4d4-434e-8d60-129346b1392e is in state STARTED 2025-03-23 13:39:19.823588 | orchestrator | 2025-03-23 13:39:19 | INFO  | Task 96d358bb-6e84-452e-9206-bfa83f78a355 is in state STARTED 2025-03-23 13:39:19.824505 | orchestrator | 2025-03-23 13:39:19 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:39:19.824619 | orchestrator | 2025-03-23 13:39:19 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:39:22.873152 | orchestrator | 2025-03-23 13:39:22 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:39:22.874138 | orchestrator | 2025-03-23 13:39:22 | INFO  | Task abaf4596-b4d4-434e-8d60-129346b1392e is in state STARTED 2025-03-23 13:39:22.875873 | orchestrator | 2025-03-23 13:39:22 | INFO  | Task 96d358bb-6e84-452e-9206-bfa83f78a355 is in state STARTED 2025-03-23 13:39:22.876985 | orchestrator | 2025-03-23 13:39:22 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:39:22.877215 | orchestrator | 2025-03-23 13:39:22 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:39:25.937415 | orchestrator | 2025-03-23 13:39:25 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:39:25.940509 | orchestrator | 2025-03-23 13:39:25 | INFO  | Task abaf4596-b4d4-434e-8d60-129346b1392e is in state STARTED 2025-03-23 13:39:25.942999 | orchestrator | 2025-03-23 13:39:25 | INFO  | Task 96d358bb-6e84-452e-9206-bfa83f78a355 is in state STARTED 2025-03-23 13:39:25.944750 | orchestrator | 2025-03-23 13:39:25 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:39:25.946566 | orchestrator | 2025-03-23 13:39:25 | INFO  | Task 067adc97-47bc-420e-95c2-7767430206df is in state STARTED 2025-03-23 13:39:29.019836 | orchestrator | 2025-03-23 13:39:25 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:39:29.019963 | orchestrator | 2025-03-23 13:39:29 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:39:29.021420 | orchestrator | 2025-03-23 13:39:29 | INFO  | Task abaf4596-b4d4-434e-8d60-129346b1392e is in state STARTED 2025-03-23 13:39:29.023285 | orchestrator | 
2025-03-23 13:39:29 | INFO  | Task 96d358bb-6e84-452e-9206-bfa83f78a355 is in state STARTED 2025-03-23 13:39:29.024775 | orchestrator | 2025-03-23 13:39:29 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:39:29.025849 | orchestrator | 2025-03-23 13:39:29 | INFO  | Task 067adc97-47bc-420e-95c2-7767430206df is in state STARTED 2025-03-23 13:39:32.077235 | orchestrator | 2025-03-23 13:39:29 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:39:32.077360 | orchestrator | 2025-03-23 13:39:32 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:39:32.077969 | orchestrator | 2025-03-23 13:39:32 | INFO  | Task abaf4596-b4d4-434e-8d60-129346b1392e is in state STARTED 2025-03-23 13:39:32.079296 | orchestrator | 2025-03-23 13:39:32 | INFO  | Task 96d358bb-6e84-452e-9206-bfa83f78a355 is in state STARTED 2025-03-23 13:39:32.080780 | orchestrator | 2025-03-23 13:39:32 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:39:32.082124 | orchestrator | 2025-03-23 13:39:32 | INFO  | Task 067adc97-47bc-420e-95c2-7767430206df is in state STARTED 2025-03-23 13:39:35.134580 | orchestrator | 2025-03-23 13:39:32 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:39:35.134744 | orchestrator | 2025-03-23 13:39:35 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:39:35.135398 | orchestrator | 2025-03-23 13:39:35 | INFO  | Task abaf4596-b4d4-434e-8d60-129346b1392e is in state STARTED 2025-03-23 13:39:35.136573 | orchestrator | 2025-03-23 13:39:35 | INFO  | Task 96d358bb-6e84-452e-9206-bfa83f78a355 is in state STARTED 2025-03-23 13:39:35.138130 | orchestrator | 2025-03-23 13:39:35 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:39:35.138559 | orchestrator | 2025-03-23 13:39:35 | INFO  | Task 067adc97-47bc-420e-95c2-7767430206df is in state STARTED 2025-03-23 13:39:38.185293 | orchestrator | 2025-03-23 13:39:35 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:39:38.185423 | orchestrator | 2025-03-23 13:39:38 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:39:38.186760 | orchestrator | 2025-03-23 13:39:38 | INFO  | Task abaf4596-b4d4-434e-8d60-129346b1392e is in state SUCCESS 2025-03-23 13:39:38.189448 | orchestrator | 2025-03-23 13:39:38.189483 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-03-23 13:39:38.189499 | orchestrator | 2025-03-23 13:39:38.189596 | orchestrator | PLAY [Apply role fetch-keys] *************************************************** 2025-03-23 13:39:38.189615 | orchestrator | 2025-03-23 13:39:38.189629 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-03-23 13:39:38.189688 | orchestrator | Sunday 23 March 2025 13:39:03 +0000 (0:00:00.482) 0:00:00.482 ********** 2025-03-23 13:39:38.189704 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0 2025-03-23 13:39:38.189719 | orchestrator | 2025-03-23 13:39:38.189733 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-03-23 13:39:38.189748 | orchestrator | Sunday 23 March 2025 13:39:03 +0000 (0:00:00.225) 0:00:00.708 ********** 2025-03-23 13:39:38.189763 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-03-23 13:39:38.189777 | orchestrator | changed: [testbed-node-0] => 
(item=testbed-node-1) 2025-03-23 13:39:38.189791 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-03-23 13:39:38.189804 | orchestrator | 2025-03-23 13:39:38.189818 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-03-23 13:39:38.189832 | orchestrator | Sunday 23 March 2025 13:39:05 +0000 (0:00:01.051) 0:00:01.759 ********** 2025-03-23 13:39:38.189846 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2025-03-23 13:39:38.189860 | orchestrator | 2025-03-23 13:39:38.189874 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-03-23 13:39:38.189887 | orchestrator | Sunday 23 March 2025 13:39:05 +0000 (0:00:00.286) 0:00:02.046 ********** 2025-03-23 13:39:38.189901 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:38.189916 | orchestrator | 2025-03-23 13:39:38.189930 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-03-23 13:39:38.189944 | orchestrator | Sunday 23 March 2025 13:39:06 +0000 (0:00:00.684) 0:00:02.730 ********** 2025-03-23 13:39:38.189958 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:38.189972 | orchestrator | 2025-03-23 13:39:38.189986 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-03-23 13:39:38.189999 | orchestrator | Sunday 23 March 2025 13:39:06 +0000 (0:00:00.194) 0:00:02.925 ********** 2025-03-23 13:39:38.190013 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:38.190072 | orchestrator | 2025-03-23 13:39:38.190086 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-03-23 13:39:38.190100 | orchestrator | Sunday 23 March 2025 13:39:06 +0000 (0:00:00.512) 0:00:03.437 ********** 2025-03-23 13:39:38.190114 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:38.190127 | orchestrator | 2025-03-23 13:39:38.190141 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-03-23 13:39:38.190155 | orchestrator | Sunday 23 March 2025 13:39:06 +0000 (0:00:00.175) 0:00:03.612 ********** 2025-03-23 13:39:38.190191 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:38.190205 | orchestrator | 2025-03-23 13:39:38.190219 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-03-23 13:39:38.190235 | orchestrator | Sunday 23 March 2025 13:39:07 +0000 (0:00:00.163) 0:00:03.776 ********** 2025-03-23 13:39:38.190250 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:38.190265 | orchestrator | 2025-03-23 13:39:38.190281 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-03-23 13:39:38.190296 | orchestrator | Sunday 23 March 2025 13:39:07 +0000 (0:00:00.164) 0:00:03.941 ********** 2025-03-23 13:39:38.190311 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.190327 | orchestrator | 2025-03-23 13:39:38.190342 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-03-23 13:39:38.190450 | orchestrator | Sunday 23 March 2025 13:39:07 +0000 (0:00:00.150) 0:00:04.092 ********** 2025-03-23 13:39:38.190468 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:38.190484 | orchestrator | 2025-03-23 13:39:38.190499 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-03-23 13:39:38.190514 | orchestrator | 
Sunday 23 March 2025 13:39:07 +0000 (0:00:00.330) 0:00:04.423 ********** 2025-03-23 13:39:38.190530 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-23 13:39:38.190545 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-23 13:39:38.190561 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-23 13:39:38.190576 | orchestrator | 2025-03-23 13:39:38.190590 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-03-23 13:39:38.190604 | orchestrator | Sunday 23 March 2025 13:39:08 +0000 (0:00:00.728) 0:00:05.151 ********** 2025-03-23 13:39:38.190618 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:38.190633 | orchestrator | 2025-03-23 13:39:38.190665 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-03-23 13:39:38.190680 | orchestrator | Sunday 23 March 2025 13:39:08 +0000 (0:00:00.264) 0:00:05.416 ********** 2025-03-23 13:39:38.190694 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-03-23 13:39:38.190708 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-23 13:39:38.190728 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-23 13:39:38.190743 | orchestrator | 2025-03-23 13:39:38.190757 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-03-23 13:39:38.190770 | orchestrator | Sunday 23 March 2025 13:39:11 +0000 (0:00:02.353) 0:00:07.769 ********** 2025-03-23 13:39:38.190784 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-23 13:39:38.190799 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-23 13:39:38.190813 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-23 13:39:38.190827 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.190841 | orchestrator | 2025-03-23 13:39:38.190855 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-03-23 13:39:38.190881 | orchestrator | Sunday 23 March 2025 13:39:11 +0000 (0:00:00.483) 0:00:08.252 ********** 2025-03-23 13:39:38.190901 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-03-23 13:39:38.190918 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-03-23 13:39:38.190932 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-03-23 13:39:38.190956 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.190971 | orchestrator | 2025-03-23 13:39:38.190985 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-03-23 13:39:38.190998 | orchestrator | Sunday 23 March 2025 13:39:12 +0000 (0:00:01.022) 0:00:09.275 ********** 2025-03-23 13:39:38.191014 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-03-23 13:39:38.191029 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-03-23 13:39:38.191044 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-03-23 13:39:38.191058 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.191072 | orchestrator | 2025-03-23 13:39:38.191086 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-03-23 13:39:38.191100 | orchestrator | Sunday 23 March 2025 13:39:12 +0000 (0:00:00.210) 0:00:09.486 ********** 2025-03-23 13:39:38.191119 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '7c817de588a9', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-03-23 13:39:09.488945', 'end': '2025-03-23 13:39:09.530058', 'delta': '0:00:00.041113', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7c817de588a9'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-03-23 13:39:38.191138 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'fc904a968a1f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-03-23 13:39:10.147806', 'end': '2025-03-23 13:39:10.198614', 'delta': '0:00:00.050808', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc904a968a1f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-03-23 13:39:38.191162 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '6f2d0ad80043', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-03-23 13:39:10.835527', 'end': '2025-03-23 13:39:10.878675', 'delta': '0:00:00.043148', 'msg': '', 
'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6f2d0ad80043'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-03-23 13:39:38.191183 | orchestrator | 2025-03-23 13:39:38.191198 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-03-23 13:39:38.191212 | orchestrator | Sunday 23 March 2025 13:39:13 +0000 (0:00:00.228) 0:00:09.715 ********** 2025-03-23 13:39:38.191226 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:38.191240 | orchestrator | 2025-03-23 13:39:38.191253 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-03-23 13:39:38.191267 | orchestrator | Sunday 23 March 2025 13:39:13 +0000 (0:00:00.764) 0:00:10.479 ********** 2025-03-23 13:39:38.191281 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-03-23 13:39:38.191295 | orchestrator | 2025-03-23 13:39:38.191309 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-03-23 13:39:38.191323 | orchestrator | Sunday 23 March 2025 13:39:15 +0000 (0:00:01.399) 0:00:11.878 ********** 2025-03-23 13:39:38.191337 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.191350 | orchestrator | 2025-03-23 13:39:38.191364 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-03-23 13:39:38.191378 | orchestrator | Sunday 23 March 2025 13:39:15 +0000 (0:00:00.154) 0:00:12.033 ********** 2025-03-23 13:39:38.191392 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.191406 | orchestrator | 2025-03-23 13:39:38.191420 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-03-23 13:39:38.191434 | orchestrator | Sunday 23 March 2025 13:39:15 +0000 (0:00:00.244) 0:00:12.277 ********** 2025-03-23 13:39:38.191448 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.191462 | orchestrator | 2025-03-23 13:39:38.191475 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-03-23 13:39:38.191489 | orchestrator | Sunday 23 March 2025 13:39:15 +0000 (0:00:00.152) 0:00:12.429 ********** 2025-03-23 13:39:38.191503 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:38.191517 | orchestrator | 2025-03-23 13:39:38.191531 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-03-23 13:39:38.191545 | orchestrator | Sunday 23 March 2025 13:39:15 +0000 (0:00:00.159) 0:00:12.589 ********** 2025-03-23 13:39:38.191558 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.191572 | orchestrator | 2025-03-23 13:39:38.191586 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-03-23 13:39:38.191600 | orchestrator | Sunday 23 March 2025 13:39:16 +0000 (0:00:00.300) 0:00:12.889 ********** 2025-03-23 13:39:38.191614 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.191628 | orchestrator | 2025-03-23 13:39:38.191656 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-03-23 13:39:38.191671 | orchestrator | Sunday 23 March 2025 13:39:16 +0000 
(0:00:00.155) 0:00:13.044 ********** 2025-03-23 13:39:38.191685 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.191707 | orchestrator | 2025-03-23 13:39:38.191722 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-03-23 13:39:38.191736 | orchestrator | Sunday 23 March 2025 13:39:16 +0000 (0:00:00.170) 0:00:13.215 ********** 2025-03-23 13:39:38.191751 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.191766 | orchestrator | 2025-03-23 13:39:38.191780 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-03-23 13:39:38.191794 | orchestrator | Sunday 23 March 2025 13:39:16 +0000 (0:00:00.137) 0:00:13.353 ********** 2025-03-23 13:39:38.191809 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.191823 | orchestrator | 2025-03-23 13:39:38.191837 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-03-23 13:39:38.191865 | orchestrator | Sunday 23 March 2025 13:39:17 +0000 (0:00:00.381) 0:00:13.734 ********** 2025-03-23 13:39:38.191880 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.191894 | orchestrator | 2025-03-23 13:39:38.191908 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-03-23 13:39:38.191928 | orchestrator | Sunday 23 March 2025 13:39:17 +0000 (0:00:00.157) 0:00:13.892 ********** 2025-03-23 13:39:38.191943 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.191956 | orchestrator | 2025-03-23 13:39:38.191970 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-03-23 13:39:38.191984 | orchestrator | Sunday 23 March 2025 13:39:17 +0000 (0:00:00.156) 0:00:14.048 ********** 2025-03-23 13:39:38.191998 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.192012 | orchestrator | 2025-03-23 13:39:38.192025 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-03-23 13:39:38.192039 | orchestrator | Sunday 23 March 2025 13:39:17 +0000 (0:00:00.137) 0:00:14.185 ********** 2025-03-23 13:39:38.192053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:39:38.192075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:39:38.192091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-03-23 13:39:38.192106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:39:38.192125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:39:38.192139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:39:38.192154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:39:38.192168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-23 13:39:38.192200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b604898d-e561-495d-9e07-a30ca177e2ec', 'scsi-SQEMU_QEMU_HARDDISK_b604898d-e561-495d-9e07-a30ca177e2ec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b604898d-e561-495d-9e07-a30ca177e2ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_b604898d-e561-495d-9e07-a30ca177e2ec-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b604898d-e561-495d-9e07-a30ca177e2ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_b604898d-e561-495d-9e07-a30ca177e2ec-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b604898d-e561-495d-9e07-a30ca177e2ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_b604898d-e561-495d-9e07-a30ca177e2ec-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b604898d-e561-495d-9e07-a30ca177e2ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_b604898d-e561-495d-9e07-a30ca177e2ec-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:39:38.192219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2a7b2e0f-187f-479d-baca-3c89b3a54e1f', 'scsi-SQEMU_QEMU_HARDDISK_2a7b2e0f-187f-479d-baca-3c89b3a54e1f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:39:38.192236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_98af1d1a-144c-4faa-87cf-25faeb3fb806', 'scsi-SQEMU_QEMU_HARDDISK_98af1d1a-144c-4faa-87cf-25faeb3fb806'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:39:38.192251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d22c86c-8b28-4de1-9381-02b0bcd9097d', 'scsi-SQEMU_QEMU_HARDDISK_6d22c86c-8b28-4de1-9381-02b0bcd9097d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:39:38.192272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-03-23-12-37-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-23 13:39:38.192288 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.192302 | orchestrator | 2025-03-23 13:39:38.192316 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-03-23 13:39:38.192330 | orchestrator | Sunday 23 March 2025 13:39:17 +0000 (0:00:00.328) 0:00:14.514 ********** 2025-03-23 13:39:38.192344 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.192357 | orchestrator | 2025-03-23 13:39:38.192371 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-03-23 13:39:38.192385 | orchestrator | Sunday 23 March 2025 13:39:18 +0000 (0:00:00.260) 0:00:14.774 ********** 2025-03-23 13:39:38.192399 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.192412 | orchestrator | 2025-03-23 13:39:38.192426 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-03-23 13:39:38.192440 | orchestrator | Sunday 23 March 2025 13:39:18 +0000 (0:00:00.165) 0:00:14.940 ********** 2025-03-23 13:39:38.192454 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.192467 | orchestrator | 2025-03-23 13:39:38.192481 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-03-23 13:39:38.192495 | orchestrator | Sunday 23 March 2025 13:39:18 +0000 (0:00:00.152) 0:00:15.092 ********** 2025-03-23 13:39:38.192514 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:38.192529 | orchestrator | 2025-03-23 13:39:38.192542 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-03-23 13:39:38.192556 | orchestrator | Sunday 23 March 2025 13:39:18 +0000 (0:00:00.522) 0:00:15.614 ********** 2025-03-23 13:39:38.192570 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:38.192584 | orchestrator | 2025-03-23 13:39:38.192597 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-03-23 13:39:38.192611 | orchestrator | Sunday 23 March 2025 13:39:19 +0000 (0:00:00.166) 0:00:15.781 ********** 2025-03-23 13:39:38.192625 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:38.192639 | orchestrator | 2025-03-23 13:39:38.192712 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-03-23 13:39:38.192727 | orchestrator | Sunday 23 March 2025 13:39:19 +0000 (0:00:00.814) 
0:00:16.595 ********** 2025-03-23 13:39:38.192741 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:38.192755 | orchestrator | 2025-03-23 13:39:38.192769 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-03-23 13:39:38.192783 | orchestrator | Sunday 23 March 2025 13:39:20 +0000 (0:00:00.158) 0:00:16.754 ********** 2025-03-23 13:39:38.192796 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.192810 | orchestrator | 2025-03-23 13:39:38.192824 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-03-23 13:39:38.192838 | orchestrator | Sunday 23 March 2025 13:39:20 +0000 (0:00:00.271) 0:00:17.026 ********** 2025-03-23 13:39:38.192852 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.192865 | orchestrator | 2025-03-23 13:39:38.192879 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-03-23 13:39:38.192893 | orchestrator | Sunday 23 March 2025 13:39:20 +0000 (0:00:00.191) 0:00:17.217 ********** 2025-03-23 13:39:38.192907 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-23 13:39:38.192929 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-23 13:39:38.192943 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-23 13:39:38.192957 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.192970 | orchestrator | 2025-03-23 13:39:38.192984 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-03-23 13:39:38.192998 | orchestrator | Sunday 23 March 2025 13:39:21 +0000 (0:00:00.518) 0:00:17.735 ********** 2025-03-23 13:39:38.193012 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-23 13:39:38.193026 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-23 13:39:38.193039 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-23 13:39:38.193053 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.193067 | orchestrator | 2025-03-23 13:39:38.193080 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-03-23 13:39:38.193094 | orchestrator | Sunday 23 March 2025 13:39:21 +0000 (0:00:00.514) 0:00:18.250 ********** 2025-03-23 13:39:38.193108 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-23 13:39:38.193122 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-03-23 13:39:38.193136 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-03-23 13:39:38.193148 | orchestrator | 2025-03-23 13:39:38.193160 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-03-23 13:39:38.193177 | orchestrator | Sunday 23 March 2025 13:39:22 +0000 (0:00:01.274) 0:00:19.524 ********** 2025-03-23 13:39:38.193190 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-23 13:39:38.193202 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-23 13:39:38.193214 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-23 13:39:38.193226 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.193239 | orchestrator | 2025-03-23 13:39:38.193251 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-03-23 13:39:38.193263 | orchestrator | Sunday 23 March 2025 13:39:23 +0000 
(0:00:00.236) 0:00:19.761 ********** 2025-03-23 13:39:38.193276 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-23 13:39:38.193288 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-23 13:39:38.193300 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-23 13:39:38.193313 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.193325 | orchestrator | 2025-03-23 13:39:38.193337 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-03-23 13:39:38.193350 | orchestrator | Sunday 23 March 2025 13:39:23 +0000 (0:00:00.230) 0:00:19.991 ********** 2025-03-23 13:39:38.193362 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-03-23 13:39:38.193374 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-03-23 13:39:38.193387 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-03-23 13:39:38.193399 | orchestrator | 2025-03-23 13:39:38.193412 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-03-23 13:39:38.193424 | orchestrator | Sunday 23 March 2025 13:39:23 +0000 (0:00:00.213) 0:00:20.204 ********** 2025-03-23 13:39:38.193436 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.193448 | orchestrator | 2025-03-23 13:39:38.193460 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-03-23 13:39:38.193473 | orchestrator | Sunday 23 March 2025 13:39:23 +0000 (0:00:00.371) 0:00:20.576 ********** 2025-03-23 13:39:38.193485 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:39:38.193498 | orchestrator | 2025-03-23 13:39:38.193510 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-03-23 13:39:38.193522 | orchestrator | Sunday 23 March 2025 13:39:24 +0000 (0:00:00.155) 0:00:20.732 ********** 2025-03-23 13:39:38.193541 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-23 13:39:38.193559 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-23 13:39:38.193572 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-23 13:39:38.193584 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-03-23 13:39:38.193596 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-03-23 13:39:38.193608 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-03-23 13:39:38.193621 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-03-23 13:39:38.193633 | orchestrator | 2025-03-23 13:39:38.193663 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-03-23 13:39:38.193676 | orchestrator | Sunday 23 March 2025 13:39:24 +0000 (0:00:00.928) 0:00:21.660 ********** 2025-03-23 13:39:38.193688 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-23 13:39:38.193701 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-23 13:39:38.193713 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-23 
13:39:38.193725 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-03-23 13:39:38.193738 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-03-23 13:39:38.193750 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-03-23 13:39:38.193762 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-03-23 13:39:38.193774 | orchestrator | 2025-03-23 13:39:38.193787 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ****************************** 2025-03-23 13:39:38.193799 | orchestrator | Sunday 23 March 2025 13:39:26 +0000 (0:00:01.729) 0:00:23.390 ********** 2025-03-23 13:39:38.193811 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:39:38.193824 | orchestrator | 2025-03-23 13:39:38.193836 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] *** 2025-03-23 13:39:38.193849 | orchestrator | Sunday 23 March 2025 13:39:27 +0000 (0:00:00.480) 0:00:23.870 ********** 2025-03-23 13:39:38.193861 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-23 13:39:38.193873 | orchestrator | 2025-03-23 13:39:38.193886 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] *** 2025-03-23 13:39:38.193898 | orchestrator | Sunday 23 March 2025 13:39:27 +0000 (0:00:00.710) 0:00:24.580 ********** 2025-03-23 13:39:38.193911 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring) 2025-03-23 13:39:38.193936 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring) 2025-03-23 13:39:38.193948 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring) 2025-03-23 13:39:38.193961 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring) 2025-03-23 13:39:38.193973 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring) 2025-03-23 13:39:38.193985 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring) 2025-03-23 13:39:38.193998 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring) 2025-03-23 13:39:38.194010 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring) 2025-03-23 13:39:38.194052 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring) 2025-03-23 13:39:38.194065 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring) 2025-03-23 13:39:38.194077 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring) 2025-03-23 13:39:38.194096 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring) 2025-03-23 13:39:38.194108 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) 2025-03-23 13:39:38.194121 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) 2025-03-23 13:39:38.194133 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) 2025-03-23 13:39:38.194145 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) 2025-03-23 13:39:38.194157 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring) 
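Note: the ceph-fetch-keys tasks above pull each client and bootstrap keyring from the first monitor node into the share directory on the Ansible side. A minimal sketch of that fetch pattern, assuming illustrative names (fetch_directory, a trimmed keyring list) rather than the actual ceph-ansible role internals:

```yaml
# Hypothetical sketch of the ceph-fetch-keys pattern seen in the log above.
# Variable names and the shortened file list are assumptions, not the role's code.
- name: create a local fetch directory if it does not exist
  ansible.builtin.file:
    path: "{{ fetch_directory }}"        # e.g. /share/<cluster fsid>/
    state: directory
  delegate_to: localhost

- name: copy ceph user and bootstrap keys to the ansible server
  ansible.builtin.fetch:
    src: "{{ item }}"
    dest: "{{ fetch_directory }}/"
    flat: true                           # keep a flat layout instead of per-host subdirs
  loop:
    - /etc/ceph/ceph.client.admin.keyring
    - /var/lib/ceph/bootstrap-osd/ceph.keyring
```

With flat: true, ansible.builtin.fetch writes each file directly into the destination directory instead of nesting it under the remote hostname, which matches the flat /share/<fsid>/ layout the log shows for the fetched keyrings.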
2025-03-23 13:39:38.194169 | orchestrator | 2025-03-23 13:39:38.194181 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:39:38.194193 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-03-23 13:39:38.194206 | orchestrator | 2025-03-23 13:39:38.194219 | orchestrator | 2025-03-23 13:39:38.194231 | orchestrator | 2025-03-23 13:39:38.194243 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:39:38.194255 | orchestrator | Sunday 23 March 2025 13:39:34 +0000 (0:00:06.953) 0:00:31.534 ********** 2025-03-23 13:39:38.194267 | orchestrator | =============================================================================== 2025-03-23 13:39:38.194279 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 6.95s 2025-03-23 13:39:38.194292 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.35s 2025-03-23 13:39:38.194304 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.73s 2025-03-23 13:39:38.194322 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.40s 2025-03-23 13:39:41.246173 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.27s 2025-03-23 13:39:41.246281 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 1.05s 2025-03-23 13:39:41.246299 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 1.02s 2025-03-23 13:39:41.246313 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.93s 2025-03-23 13:39:41.246327 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.81s 2025-03-23 13:39:41.246340 | orchestrator | ceph-facts : set_fact _container_exec_cmd ------------------------------- 0.76s 2025-03-23 13:39:41.246352 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.73s 2025-03-23 13:39:41.246365 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.71s 2025-03-23 13:39:41.246378 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.68s 2025-03-23 13:39:41.246407 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.52s 2025-03-23 13:39:41.246421 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.52s 2025-03-23 13:39:41.246434 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.51s 2025-03-23 13:39:41.246447 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.51s 2025-03-23 13:39:41.246460 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.48s 2025-03-23 13:39:41.246473 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.48s 2025-03-23 13:39:41.246485 | orchestrator | ceph-facts : resolve dedicated_device link(s) --------------------------- 0.38s 2025-03-23 13:39:41.246499 | orchestrator | 2025-03-23 13:39:38 | INFO  | Task 96d358bb-6e84-452e-9206-bfa83f78a355 is in state STARTED 2025-03-23 13:39:41.246513 | orchestrator | 2025-03-23 13:39:38 | INFO  | Task 
4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:39:41.246526 | orchestrator | 2025-03-23 13:39:38 | INFO  | Task 067adc97-47bc-420e-95c2-7767430206df is in state SUCCESS 2025-03-23 13:39:41.246560 | orchestrator | 2025-03-23 13:39:38 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:39:41.246592 | orchestrator | 2025-03-23 13:39:41 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:39:41.247554 | orchestrator | 2025-03-23 13:39:41 | INFO  | Task 9e60dc9c-678b-4f42-b1a8-319b786003ba is in state STARTED 2025-03-23 13:39:41.248627 | orchestrator | 2025-03-23 13:39:41 | INFO  | Task 96d358bb-6e84-452e-9206-bfa83f78a355 is in state SUCCESS 2025-03-23 13:39:41.250074 | orchestrator | 2025-03-23 13:39:41 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:39:41.250181 | orchestrator | 2025-03-23 13:39:41 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:39:44.301307 | orchestrator | 2025-03-23 13:39:44 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:39:44.301931 | orchestrator | 2025-03-23 13:39:44 | INFO  | Task 9e60dc9c-678b-4f42-b1a8-319b786003ba is in state STARTED 2025-03-23 13:39:44.305752 | orchestrator | 2025-03-23 13:39:44 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:39:47.347774 | orchestrator | 2025-03-23 13:39:44 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:39:47.347897 | orchestrator | 2025-03-23 13:39:47 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:39:47.351215 | orchestrator | 2025-03-23 13:39:47 | INFO  | Task 9e60dc9c-678b-4f42-b1a8-319b786003ba is in state STARTED 2025-03-23 13:39:47.354134 | orchestrator | 2025-03-23 13:39:47 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:39:50.411162 | orchestrator | 2025-03-23 13:39:47 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:39:50.411303 | orchestrator | 2025-03-23 13:39:50 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:39:50.412203 | orchestrator | 2025-03-23 13:39:50 | INFO  | Task 9e60dc9c-678b-4f42-b1a8-319b786003ba is in state STARTED 2025-03-23 13:39:50.414211 | orchestrator | 2025-03-23 13:39:50 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:39:53.470379 | orchestrator | 2025-03-23 13:39:50 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:39:53.470620 | orchestrator | 2025-03-23 13:39:53 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:39:53.470720 | orchestrator | 2025-03-23 13:39:53 | INFO  | Task 9e60dc9c-678b-4f42-b1a8-319b786003ba is in state STARTED 2025-03-23 13:39:53.472911 | orchestrator | 2025-03-23 13:39:53 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:39:56.515852 | orchestrator | 2025-03-23 13:39:53 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:39:56.516020 | orchestrator | 2025-03-23 13:39:56 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:39:56.516512 | orchestrator | 2025-03-23 13:39:56 | INFO  | Task 9e60dc9c-678b-4f42-b1a8-319b786003ba is in state STARTED 2025-03-23 13:39:56.518467 | orchestrator | 2025-03-23 13:39:56 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:39:56.518600 | orchestrator | 2025-03-23 13:39:56 | INFO  | Wait 1 second(s) until the next 
check 2025-03-23 13:39:59.577123 | orchestrator | 2025-03-23 13:39:59 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:39:59.581111 | orchestrator | 2025-03-23 13:39:59 | INFO  | Task 9e60dc9c-678b-4f42-b1a8-319b786003ba is in state STARTED 2025-03-23 13:39:59.598317 | orchestrator | 2025-03-23 13:39:59 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:40:02.636247 | orchestrator | 2025-03-23 13:39:59 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:40:02.636385 | orchestrator | 2025-03-23 13:40:02 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:40:02.637607 | orchestrator | 2025-03-23 13:40:02 | INFO  | Task 9e60dc9c-678b-4f42-b1a8-319b786003ba is in state STARTED 2025-03-23 13:40:02.639837 | orchestrator | 2025-03-23 13:40:02 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:40:05.689955 | orchestrator | 2025-03-23 13:40:02 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:40:05.690069 | orchestrator | 2025-03-23 13:40:05 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:40:05.691215 | orchestrator | 2025-03-23 13:40:05 | INFO  | Task 9e60dc9c-678b-4f42-b1a8-319b786003ba is in state STARTED 2025-03-23 13:40:05.692735 | orchestrator | 2025-03-23 13:40:05 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:40:08.738380 | orchestrator | 2025-03-23 13:40:05 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:40:08.738511 | orchestrator | 2025-03-23 13:40:08 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:40:08.739757 | orchestrator | 2025-03-23 13:40:08 | INFO  | Task 9e60dc9c-678b-4f42-b1a8-319b786003ba is in state STARTED 2025-03-23 13:40:08.741373 | orchestrator | 2025-03-23 13:40:08 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state STARTED 2025-03-23 13:40:11.801824 | orchestrator | 2025-03-23 13:40:08 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:40:11.801954 | orchestrator | 2025-03-23 13:40:11 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:40:11.802912 | orchestrator | 2025-03-23 13:40:11 | INFO  | Task 9e60dc9c-678b-4f42-b1a8-319b786003ba is in state STARTED 2025-03-23 13:40:11.802955 | orchestrator | 2025-03-23 13:40:11 | INFO  | Task 953b0b65-7e39-4d6d-938e-14a56fdcb5da is in state STARTED 2025-03-23 13:40:11.804028 | orchestrator | 2025-03-23 13:40:11 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:40:11.805008 | orchestrator | 2025-03-23 13:40:11 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:40:11.806171 | orchestrator | 2025-03-23 13:40:11 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:40:11.808107 | orchestrator | 2025-03-23 13:40:11 | INFO  | Task 4d64cbe0-d2bd-4a45-96d5-395bce5fecd8 is in state SUCCESS 2025-03-23 13:40:11.808371 | orchestrator | 2025-03-23 13:40:11.808399 | orchestrator | None 2025-03-23 13:40:11.808414 | orchestrator | 2025-03-23 13:40:11.808429 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-03-23 13:40:11.808458 | orchestrator | 2025-03-23 13:40:11.808473 | orchestrator | TASK [Check ceph keys] ********************************************************* 2025-03-23 13:40:11.808488 | orchestrator | Sunday 23 March 2025 13:38:53 +0000 
(0:00:00.153) 0:00:00.153 ********** 2025-03-23 13:40:11.808502 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-03-23 13:40:11.808516 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-03-23 13:40:11.808530 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-03-23 13:40:11.808544 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-03-23 13:40:11.808558 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-03-23 13:40:11.808572 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-03-23 13:40:11.808614 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-03-23 13:40:11.808629 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-03-23 13:40:11.808643 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-03-23 13:40:11.808677 | orchestrator | 2025-03-23 13:40:11.808692 | orchestrator | TASK [Set _fetch_ceph_keys fact] *********************************************** 2025-03-23 13:40:11.808783 | orchestrator | Sunday 23 March 2025 13:38:56 +0000 (0:00:03.259) 0:00:03.413 ********** 2025-03-23 13:40:11.808799 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-03-23 13:40:11.808813 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-03-23 13:40:11.808826 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-03-23 13:40:11.808840 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-03-23 13:40:11.808854 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-03-23 13:40:11.808868 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-03-23 13:40:11.808881 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-03-23 13:40:11.808895 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-03-23 13:40:11.808909 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-03-23 13:40:11.808923 | orchestrator | 2025-03-23 13:40:11.808937 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] *** 2025-03-23 13:40:11.808951 | orchestrator | Sunday 23 March 2025 13:38:57 +0000 (0:00:00.258) 0:00:03.671 ********** 2025-03-23 13:40:11.808964 | orchestrator | ok: [testbed-manager] => { 2025-03-23 13:40:11.808981 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete." 
2025-03-23 13:40:11.808995 | orchestrator | } 2025-03-23 13:40:11.809010 | orchestrator | 2025-03-23 13:40:11.809117 | orchestrator | TASK [Fetch ceph keys from the first monitor node] ***************************** 2025-03-23 13:40:11.809139 | orchestrator | Sunday 23 March 2025 13:38:57 +0000 (0:00:00.190) 0:00:03.862 ********** 2025-03-23 13:40:11.809154 | orchestrator | changed: [testbed-manager] 2025-03-23 13:40:11.809168 | orchestrator | 2025-03-23 13:40:11.809183 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] *********** 2025-03-23 13:40:11.809197 | orchestrator | Sunday 23 March 2025 13:39:35 +0000 (0:00:38.489) 0:00:42.352 ********** 2025-03-23 13:40:11.809211 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'}) 2025-03-23 13:40:11.809225 | orchestrator | 2025-03-23 13:40:11.809239 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ******************** 2025-03-23 13:40:11.809254 | orchestrator | Sunday 23 March 2025 13:39:36 +0000 (0:00:00.499) 0:00:42.852 ********** 2025-03-23 13:40:11.809365 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'}) 2025-03-23 13:40:11.809383 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'}) 2025-03-23 13:40:11.809397 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'}) 2025-03-23 13:40:11.809412 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'}) 2025-03-23 13:40:11.809439 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'}) 2025-03-23 13:40:11.809465 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'}) 2025-03-23 13:40:11.810539 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'}) 2025-03-23 13:40:11.810573 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'}) 2025-03-23 13:40:11.810589 | orchestrator | 2025-03-23 13:40:11.810604 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] ******************* 2025-03-23 13:40:11.810620 | orchestrator | Sunday 23 March 2025 13:39:39 +0000 (0:00:03.211) 0:00:46.063 ********** 2025-03-23 13:40:11.810635 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:40:11.810672 | orchestrator | 2025-03-23 13:40:11.810687 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:40:11.810702 | orchestrator | 
testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-23 13:40:11.810716 | orchestrator | 2025-03-23 13:40:11.810730 | orchestrator | Sunday 23 March 2025 13:39:39 +0000 (0:00:00.027) 0:00:46.091 ********** 2025-03-23 13:40:11.810744 | orchestrator | =============================================================================== 2025-03-23 13:40:11.810758 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 38.49s 2025-03-23 13:40:11.810772 | orchestrator | Check ceph keys --------------------------------------------------------- 3.26s 2025-03-23 13:40:11.810786 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 3.21s 2025-03-23 13:40:11.810800 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.50s 2025-03-23 13:40:11.810814 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.26s 2025-03-23 13:40:11.810828 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.19s 2025-03-23 13:40:11.810842 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.03s 2025-03-23 13:40:11.810856 | orchestrator | 2025-03-23 13:40:11.810906 | orchestrator | 2025-03-23 13:40:11.810922 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 13:40:11.810936 | orchestrator | 2025-03-23 13:40:11.810950 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 13:40:11.810964 | orchestrator | Sunday 23 March 2025 13:37:19 +0000 (0:00:00.393) 0:00:00.393 ********** 2025-03-23 13:40:11.810978 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:40:11.810993 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:40:11.811007 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:40:11.811021 | orchestrator | 2025-03-23 13:40:11.811035 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-23 13:40:11.811049 | orchestrator | Sunday 23 March 2025 13:37:20 +0000 (0:00:00.451) 0:00:00.844 ********** 2025-03-23 13:40:11.811063 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-03-23 13:40:11.811077 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-03-23 13:40:11.811091 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-03-23 13:40:11.811105 | orchestrator | 2025-03-23 13:40:11.811119 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-03-23 13:40:11.811133 | orchestrator | 2025-03-23 13:40:11.811149 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-03-23 13:40:11.811165 | orchestrator | Sunday 23 March 2025 13:37:20 +0000 (0:00:00.379) 0:00:01.223 ********** 2025-03-23 13:40:11.811181 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:40:11.811210 | orchestrator | 2025-03-23 13:40:11.811226 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-03-23 13:40:11.811241 | orchestrator | Sunday 23 March 2025 13:37:21 +0000 (0:00:00.902) 0:00:02.126 ********** 2025-03-23 13:40:11.811261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-23 13:40:11.811283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-23 13:40:11.811335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-23 13:40:11.811355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-23 13:40:11.811380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-23 13:40:11.811397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-23 13:40:11.811413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-23 13:40:11.811430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-23 13:40:11.811446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-23 13:40:11.811462 | orchestrator | 2025-03-23 13:40:11.811478 | orchestrator | TASK [keystone 
: Check if policies shall be overwritten] *********************** 2025-03-23 13:40:11.811499 | orchestrator | Sunday 23 March 2025 13:37:23 +0000 (0:00:02.235) 0:00:04.362 ********** 2025-03-23 13:40:11.811514 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-03-23 13:40:11.811528 | orchestrator | 2025-03-23 13:40:11.811542 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-03-23 13:40:11.811556 | orchestrator | Sunday 23 March 2025 13:37:24 +0000 (0:00:00.626) 0:00:04.989 ********** 2025-03-23 13:40:11.811570 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:40:11.811592 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:40:11.811606 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:40:11.811620 | orchestrator | 2025-03-23 13:40:11.811642 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-03-23 13:40:11.811689 | orchestrator | Sunday 23 March 2025 13:37:24 +0000 (0:00:00.531) 0:00:05.520 ********** 2025-03-23 13:40:11.811704 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-23 13:40:11.811718 | orchestrator | 2025-03-23 13:40:11.811732 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-03-23 13:40:11.811745 | orchestrator | Sunday 23 March 2025 13:37:25 +0000 (0:00:00.463) 0:00:05.984 ********** 2025-03-23 13:40:11.811764 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:40:11.811779 | orchestrator | 2025-03-23 13:40:11.811793 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-03-23 13:40:11.811854 | orchestrator | Sunday 23 March 2025 13:37:25 +0000 (0:00:00.711) 0:00:06.695 ********** 2025-03-23 13:40:11.811869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-23 13:40:11.811885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-23 13:40:11.811909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-23 13:40:11.811941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-23 13:40:11.811956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-23 13:40:11.811971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-23 13:40:11.811985 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-23 13:40:11.812000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-23 13:40:11.812015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-23 13:40:11.812036 | orchestrator | 2025-03-23 13:40:11.812050 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-03-23 13:40:11.812065 | orchestrator | Sunday 23 March 2025 13:37:29 +0000 (0:00:03.536) 0:00:10.232 ********** 2025-03-23 13:40:11.812086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-23 13:40:11.812102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 13:40:11.812117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-23 13:40:11.812132 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:40:11.812147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-23 13:40:11.812162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 13:40:11.812191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-23 13:40:11.812207 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:40:11.812223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-23 13:40:11.812238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 13:40:11.812253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-23 13:40:11.812267 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:40:11.812282 | orchestrator | 2025-03-23 13:40:11.812296 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-03-23 13:40:11.812310 | orchestrator | Sunday 23 March 2025 13:37:30 +0000 (0:00:00.872) 0:00:11.104 ********** 2025-03-23 13:40:11.812325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-23 13:40:11.812353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 13:40:11.812368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-23 13:40:11.812383 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:40:11.812398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-23 13:40:11.812413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 13:40:11.812428 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-23 13:40:11.812462 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:40:11.812484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-23 13:40:11.812500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 13:40:11.812514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-23 13:40:11.812529 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:40:11.812543 | orchestrator | 2025-03-23 13:40:11.812557 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-03-23 13:40:11.812571 | orchestrator | Sunday 23 March 2025 13:37:31 +0000 (0:00:01.149) 0:00:12.254 ********** 2025-03-23 13:40:11.812586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 
'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-23 13:40:11.812607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-23 13:40:11.812630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-23 13:40:11.812663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-23 13:40:11.812680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-23 13:40:11.812695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-23 13:40:11.812716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-23 13:40:11.812731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-23 13:40:11.812752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-23 13:40:11.812767 | orchestrator | 
2025-03-23 13:40:11.812782 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-03-23 13:40:11.812796 | orchestrator | Sunday 23 March 2025 13:37:35 +0000 (0:00:03.631) 0:00:15.885 ********** 2025-03-23 13:40:11.812810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-23 13:40:11.812825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 13:40:11.812840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-23 13:40:11.812862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 13:40:11.812884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-23 13:40:11.812899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 13:40:11.812914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-23 13:40:11.812929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-23 13:40:11.812951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-23 13:40:11.812966 | orchestrator | 2025-03-23 13:40:11.812980 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-03-23 13:40:11.812995 | orchestrator | Sunday 23 March 2025 13:37:42 +0000 (0:00:07.882) 0:00:23.768 ********** 2025-03-23 13:40:11.813009 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:40:11.813024 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:40:11.813037 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:40:11.813051 | orchestrator | 2025-03-23 13:40:11.813066 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-03-23 13:40:11.813080 | orchestrator | Sunday 23 March 2025 13:37:45 +0000 (0:00:02.692) 0:00:26.460 ********** 2025-03-23 13:40:11.813094 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:40:11.813108 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:40:11.813122 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:40:11.813136 | orchestrator | 2025-03-23 13:40:11.813154 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-03-23 13:40:11.813169 | orchestrator | Sunday 23 March 2025 13:37:46 +0000 (0:00:01.237) 0:00:27.697 ********** 2025-03-23 13:40:11.813183 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:40:11.813197 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:40:11.813211 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:40:11.813225 | orchestrator | 2025-03-23 13:40:11.813239 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-03-23 13:40:11.813253 | orchestrator | Sunday 23 March 2025 13:37:47 +0000 (0:00:00.549) 0:00:28.246 ********** 2025-03-23 13:40:11.813267 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:40:11.813281 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:40:11.813301 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:40:11.813315 | orchestrator | 2025-03-23 13:40:11.813328 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-03-23 13:40:11.813342 | orchestrator | Sunday 23 March 2025 13:37:47 +0000 (0:00:00.464) 0:00:28.711 ********** 2025-03-23 13:40:11.813357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-23 13:40:11.813379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 13:40:11.813394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-23 13:40:11.813410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 13:40:11.813432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance "roundrobin"']}}}}) 2025-03-23 13:40:11.813447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 13:40:11.813468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-23 13:40:11.813483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-23 13:40:11.813497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-23 13:40:11.813512 | orchestrator | 2025-03-23 13:40:11.813526 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-03-23 13:40:11.813540 | orchestrator | Sunday 23 March 2025 13:37:50 +0000 (0:00:02.810) 0:00:31.521 ********** 2025-03-23 13:40:11.813554 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:40:11.813568 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:40:11.813582 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:40:11.813596 | orchestrator | 2025-03-23 13:40:11.813610 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-03-23 13:40:11.813624 | orchestrator | Sunday 23 March 2025 13:37:51 +0000 (0:00:00.338) 0:00:31.860 ********** 2025-03-23 13:40:11.813638 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-03-23 13:40:11.813700 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-03-23 13:40:11.813721 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-03-23 13:40:11.813737 | orchestrator | 2025-03-23 13:40:11.813751 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-03-23 13:40:11.813765 | orchestrator | Sunday 23 March 2025 13:37:53 +0000 (0:00:02.075) 0:00:33.935 ********** 2025-03-23 13:40:11.813779 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-23 13:40:11.813793 | orchestrator | 2025-03-23 13:40:11.813807 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-03-23 13:40:11.813821 | orchestrator | Sunday 23 March 2025 13:37:53 +0000 (0:00:00.737) 0:00:34.672 ********** 2025-03-23 13:40:11.813835 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:40:11.813849 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:40:11.813863 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:40:11.813885 | orchestrator | 2025-03-23 13:40:11.813906 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-03-23 13:40:11.813921 | orchestrator | Sunday 23 March 2025 13:37:55 +0000 (0:00:01.248) 0:00:35.921 ********** 2025-03-23 13:40:11.813935 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-03-23 13:40:11.813948 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-23 13:40:11.813962 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-03-23 13:40:11.813976 | orchestrator | 2025-03-23 13:40:11.813990 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-03-23 13:40:11.814004 | orchestrator | Sunday 23 March 2025 13:37:56 +0000 (0:00:01.303) 0:00:37.225 ********** 2025-03-23 13:40:11.814072 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:40:11.814090 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:40:11.814104 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:40:11.814118 | orchestrator | 2025-03-23 13:40:11.814132 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-03-23 13:40:11.814146 | orchestrator | Sunday 23 March 2025 13:37:56 +0000 (0:00:00.353) 0:00:37.578 ********** 2025-03-23 13:40:11.814160 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-03-23 13:40:11.814174 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-03-23 13:40:11.814187 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-03-23 13:40:11.814201 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-03-23 13:40:11.814215 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-03-23 13:40:11.814229 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-03-23 13:40:11.814243 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-03-23 13:40:11.814257 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 
'fernet-node-sync.sh'}) 2025-03-23 13:40:11.814271 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-03-23 13:40:11.814285 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-03-23 13:40:11.814299 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-03-23 13:40:11.814312 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-03-23 13:40:11.814326 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-03-23 13:40:11.814340 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-03-23 13:40:11.814357 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-03-23 13:40:11.814372 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-03-23 13:40:11.814386 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-03-23 13:40:11.814401 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-03-23 13:40:11.814415 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-03-23 13:40:11.814429 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-03-23 13:40:11.814443 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-03-23 13:40:11.814456 | orchestrator | 2025-03-23 13:40:11.814471 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-03-23 13:40:11.814485 | orchestrator | Sunday 23 March 2025 13:38:10 +0000 (0:00:13.961) 0:00:51.540 ********** 2025-03-23 13:40:11.814507 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-03-23 13:40:11.814521 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-03-23 13:40:11.814534 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-03-23 13:40:11.814548 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-03-23 13:40:11.814562 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-03-23 13:40:11.814583 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-03-23 13:40:11.814597 | orchestrator | 2025-03-23 13:40:11.814611 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-03-23 13:40:11.814625 | orchestrator | Sunday 23 March 2025 13:38:14 +0000 (0:00:03.686) 0:00:55.226 ********** 2025-03-23 13:40:11.814640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-23 13:40:11.814706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-23 13:40:11.814723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-23 13:40:11.814751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-23 13:40:11.814774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-23 13:40:11.814788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-23 13:40:11.814801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-23 13:40:11.814814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-23 13:40:11.814827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-23 13:40:11.814840 | orchestrator | 2025-03-23 13:40:11.814853 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-03-23 13:40:11.814874 | orchestrator | Sunday 23 March 2025 13:38:17 +0000 (0:00:03.168) 0:00:58.395 ********** 2025-03-23 13:40:11.814887 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:40:11.814900 | orchestrator | skipping: 
[testbed-node-1] 2025-03-23 13:40:11.814912 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:40:11.814924 | orchestrator | 2025-03-23 13:40:11.814936 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-03-23 13:40:11.814949 | orchestrator | Sunday 23 March 2025 13:38:18 +0000 (0:00:00.423) 0:00:58.819 ********** 2025-03-23 13:40:11.814961 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:40:11.814973 | orchestrator | 2025-03-23 13:40:11.814986 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-03-23 13:40:11.814998 | orchestrator | Sunday 23 March 2025 13:38:20 +0000 (0:00:02.803) 0:01:01.623 ********** 2025-03-23 13:40:11.815010 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:40:11.815022 | orchestrator | 2025-03-23 13:40:11.815035 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-03-23 13:40:11.815047 | orchestrator | Sunday 23 March 2025 13:38:23 +0000 (0:00:02.556) 0:01:04.179 ********** 2025-03-23 13:40:11.815059 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:40:11.815072 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:40:11.815084 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:40:11.815096 | orchestrator | 2025-03-23 13:40:11.815109 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-03-23 13:40:11.815126 | orchestrator | Sunday 23 March 2025 13:38:24 +0000 (0:00:01.375) 0:01:05.555 ********** 2025-03-23 13:40:11.815139 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:40:11.815156 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:40:11.815169 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:40:11.815181 | orchestrator | 2025-03-23 13:40:11.815194 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-03-23 13:40:11.815206 | orchestrator | Sunday 23 March 2025 13:38:25 +0000 (0:00:00.418) 0:01:05.974 ********** 2025-03-23 13:40:11.815218 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:40:11.815231 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:40:11.815243 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:40:11.815255 | orchestrator | 2025-03-23 13:40:11.815267 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-03-23 13:40:11.815280 | orchestrator | Sunday 23 March 2025 13:38:25 +0000 (0:00:00.686) 0:01:06.660 ********** 2025-03-23 13:40:11.815292 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:40:11.815304 | orchestrator | 2025-03-23 13:40:11.815316 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-03-23 13:40:11.815328 | orchestrator | Sunday 23 March 2025 13:38:39 +0000 (0:00:13.553) 0:01:20.214 ********** 2025-03-23 13:40:11.815341 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:40:11.815353 | orchestrator | 2025-03-23 13:40:11.815365 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-03-23 13:40:11.815378 | orchestrator | Sunday 23 March 2025 13:38:49 +0000 (0:00:10.281) 0:01:30.496 ********** 2025-03-23 13:40:11.815390 | orchestrator | 2025-03-23 13:40:11.815402 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-03-23 13:40:11.815414 | orchestrator | Sunday 23 March 2025 13:38:49 +0000 (0:00:00.069) 0:01:30.565 ********** 
2025-03-23 13:40:11.815426 | orchestrator | 2025-03-23 13:40:11.815439 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-03-23 13:40:11.815451 | orchestrator | Sunday 23 March 2025 13:38:49 +0000 (0:00:00.061) 0:01:30.627 ********** 2025-03-23 13:40:11.815463 | orchestrator | 2025-03-23 13:40:11.815476 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-03-23 13:40:11.815488 | orchestrator | Sunday 23 March 2025 13:38:49 +0000 (0:00:00.064) 0:01:30.691 ********** 2025-03-23 13:40:11.815500 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:40:11.815512 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:40:11.815532 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:40:11.815544 | orchestrator | 2025-03-23 13:40:11.815556 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-03-23 13:40:11.815569 | orchestrator | Sunday 23 March 2025 13:39:00 +0000 (0:00:10.343) 0:01:41.034 ********** 2025-03-23 13:40:11.815581 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:40:11.815593 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:40:11.815605 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:40:11.815617 | orchestrator | 2025-03-23 13:40:11.815630 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-03-23 13:40:11.815642 | orchestrator | Sunday 23 March 2025 13:39:10 +0000 (0:00:09.928) 0:01:50.963 ********** 2025-03-23 13:40:11.815670 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:40:11.815682 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:40:11.815695 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:40:11.815707 | orchestrator | 2025-03-23 13:40:11.815719 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-03-23 13:40:11.815732 | orchestrator | Sunday 23 March 2025 13:39:18 +0000 (0:00:08.492) 0:01:59.455 ********** 2025-03-23 13:40:11.815744 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:40:11.815757 | orchestrator | 2025-03-23 13:40:11.815769 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-03-23 13:40:11.815782 | orchestrator | Sunday 23 March 2025 13:39:19 +0000 (0:00:00.959) 0:02:00.414 ********** 2025-03-23 13:40:11.815794 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:40:11.815807 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:40:11.815819 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:40:11.815832 | orchestrator | 2025-03-23 13:40:11.815844 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-03-23 13:40:11.815857 | orchestrator | Sunday 23 March 2025 13:39:20 +0000 (0:00:01.182) 0:02:01.597 ********** 2025-03-23 13:40:11.815869 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:40:11.815881 | orchestrator | 2025-03-23 13:40:11.815894 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-03-23 13:40:11.815906 | orchestrator | Sunday 23 March 2025 13:39:22 +0000 (0:00:01.568) 0:02:03.165 ********** 2025-03-23 13:40:11.815918 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-03-23 13:40:11.815931 | orchestrator | 2025-03-23 13:40:11.815944 | orchestrator | TASK [service-ks-register : keystone 
| Creating services] ********************** 2025-03-23 13:40:11.815956 | orchestrator | Sunday 23 March 2025 13:39:34 +0000 (0:00:11.694) 0:02:14.859 ********** 2025-03-23 13:40:11.815968 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-03-23 13:40:11.815980 | orchestrator | 2025-03-23 13:40:11.815993 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-03-23 13:40:11.816005 | orchestrator | Sunday 23 March 2025 13:39:56 +0000 (0:00:22.197) 0:02:37.057 ********** 2025-03-23 13:40:11.816017 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-03-23 13:40:11.816029 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-03-23 13:40:11.816042 | orchestrator | 2025-03-23 13:40:11.816054 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-03-23 13:40:11.816071 | orchestrator | Sunday 23 March 2025 13:40:04 +0000 (0:00:08.291) 0:02:45.349 ********** 2025-03-23 13:40:11.816083 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:40:11.816096 | orchestrator | 2025-03-23 13:40:11.816108 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-03-23 13:40:11.816121 | orchestrator | Sunday 23 March 2025 13:40:04 +0000 (0:00:00.133) 0:02:45.482 ********** 2025-03-23 13:40:11.816133 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:40:11.816145 | orchestrator | 2025-03-23 13:40:11.816158 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-03-23 13:40:11.816181 | orchestrator | Sunday 23 March 2025 13:40:04 +0000 (0:00:00.171) 0:02:45.654 ********** 2025-03-23 13:40:14.859462 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:40:14.859583 | orchestrator | 2025-03-23 13:40:14.859606 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-03-23 13:40:14.859622 | orchestrator | Sunday 23 March 2025 13:40:05 +0000 (0:00:00.144) 0:02:45.799 ********** 2025-03-23 13:40:14.859635 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:40:14.859697 | orchestrator | 2025-03-23 13:40:14.859715 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-03-23 13:40:14.859731 | orchestrator | Sunday 23 March 2025 13:40:05 +0000 (0:00:00.578) 0:02:46.377 ********** 2025-03-23 13:40:14.859747 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:40:14.859764 | orchestrator | 2025-03-23 13:40:14.859779 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-03-23 13:40:14.859793 | orchestrator | Sunday 23 March 2025 13:40:09 +0000 (0:00:03.986) 0:02:50.364 ********** 2025-03-23 13:40:14.859808 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:40:14.859822 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:40:14.859837 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:40:14.859872 | orchestrator | 2025-03-23 13:40:14.859887 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:40:14.859904 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-03-23 13:40:14.859921 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-03-23 
13:40:14.859936 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-03-23 13:40:14.859950 | orchestrator | 2025-03-23 13:40:14.859965 | orchestrator | 2025-03-23 13:40:14.859980 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:40:14.859995 | orchestrator | Sunday 23 March 2025 13:40:10 +0000 (0:00:00.583) 0:02:50.948 ********** 2025-03-23 13:40:14.860010 | orchestrator | =============================================================================== 2025-03-23 13:40:14.860025 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.20s 2025-03-23 13:40:14.860043 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 13.96s 2025-03-23 13:40:14.860060 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.55s 2025-03-23 13:40:14.860077 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.69s 2025-03-23 13:40:14.860093 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 10.34s 2025-03-23 13:40:14.860110 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.28s 2025-03-23 13:40:14.860127 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.93s 2025-03-23 13:40:14.860144 | orchestrator | keystone : Restart keystone container ----------------------------------- 8.49s 2025-03-23 13:40:14.860160 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 8.29s 2025-03-23 13:40:14.860177 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 7.88s 2025-03-23 13:40:14.860194 | orchestrator | keystone : Creating default user role ----------------------------------- 3.99s 2025-03-23 13:40:14.860209 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.69s 2025-03-23 13:40:14.860225 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.63s 2025-03-23 13:40:14.860240 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.54s 2025-03-23 13:40:14.860256 | orchestrator | keystone : Check keystone containers ------------------------------------ 3.17s 2025-03-23 13:40:14.860273 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.81s 2025-03-23 13:40:14.860315 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.80s 2025-03-23 13:40:14.860459 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.69s 2025-03-23 13:40:14.860481 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.56s 2025-03-23 13:40:14.860495 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.24s 2025-03-23 13:40:14.860509 | orchestrator | 2025-03-23 13:40:11 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:40:14.860542 | orchestrator | 2025-03-23 13:40:14 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:40:17.903146 | orchestrator | 2025-03-23 13:40:14 | INFO  | Task 9e60dc9c-678b-4f42-b1a8-319b786003ba is in state STARTED 2025-03-23 13:40:17.903246 | orchestrator | 2025-03-23 13:40:14 | INFO  | Task 
953b0b65-7e39-4d6d-938e-14a56fdcb5da is in state STARTED 2025-03-23 13:40:17.903262 | orchestrator | 2025-03-23 13:40:14 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:40:17.903276 | orchestrator | 2025-03-23 13:40:14 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:40:17.903289 | orchestrator | 2025-03-23 13:40:14 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:40:17.903302 | orchestrator | 2025-03-23 13:40:14 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:40:17.903330 | orchestrator | 2025-03-23 13:40:17 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:40:17.905580 | orchestrator | 2025-03-23 13:40:17 | INFO  | Task 9e60dc9c-678b-4f42-b1a8-319b786003ba is in state STARTED 2025-03-23 13:40:17.907579 | orchestrator | 2025-03-23 13:40:17 | INFO  | Task 953b0b65-7e39-4d6d-938e-14a56fdcb5da is in state STARTED 2025-03-23 13:40:17.911139 | orchestrator | 2025-03-23 13:40:17 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:40:17.912935 | orchestrator | 2025-03-23 13:40:17 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:40:17.914463 | orchestrator | 2025-03-23 13:40:17 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:40:17.914640 | orchestrator | 2025-03-23 13:40:17 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:40:20.958568 | orchestrator | 2025-03-23 13:40:20 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:40:20.959803 | orchestrator | 2025-03-23 13:40:20 | INFO  | Task 9e60dc9c-678b-4f42-b1a8-319b786003ba is in state STARTED 2025-03-23 13:40:20.960403 | orchestrator | 2025-03-23 13:40:20 | INFO  | Task 953b0b65-7e39-4d6d-938e-14a56fdcb5da is in state STARTED 2025-03-23 13:40:20.961105 | orchestrator | 2025-03-23 13:40:20 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:40:20.962547 | orchestrator | 2025-03-23 13:40:20 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:40:24.027079 | orchestrator | 2025-03-23 13:40:20 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:40:24.027171 | orchestrator | 2025-03-23 13:40:20 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:40:24.027204 | orchestrator | 2025-03-23 13:40:24 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:40:24.029582 | orchestrator | 2025-03-23 13:40:24 | INFO  | Task 9e60dc9c-678b-4f42-b1a8-319b786003ba is in state STARTED 2025-03-23 13:40:24.040877 | orchestrator | 2025-03-23 13:40:24 | INFO  | Task 953b0b65-7e39-4d6d-938e-14a56fdcb5da is in state STARTED 2025-03-23 13:40:24.042898 | orchestrator | 2025-03-23 13:40:24 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:40:24.042949 | orchestrator | 2025-03-23 13:40:24 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:40:24.042970 | orchestrator | 2025-03-23 13:40:24 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:40:27.100149 | orchestrator | 2025-03-23 13:40:24 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:40:27.100277 | orchestrator | 2025-03-23 13:40:27 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:40:27.101902 | orchestrator | 2025-03-23 
13:40:27 | INFO  | Task 9e60dc9c-678b-4f42-b1a8-319b786003ba is in state STARTED 2025-03-23 13:40:27.103879 | orchestrator | 2025-03-23 13:40:27 | INFO  | Task 953b0b65-7e39-4d6d-938e-14a56fdcb5da is in state STARTED 2025-03-23 13:40:27.105571 | orchestrator | 2025-03-23 13:40:27 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:40:27.106799 | orchestrator | 2025-03-23 13:40:27 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:40:27.108273 | orchestrator | 2025-03-23 13:40:27 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:40:30.160876 | orchestrator | 2025-03-23 13:40:27 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:40:30.161029 | orchestrator | 2025-03-23 13:40:30 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:40:30.161716 | orchestrator | 2025-03-23 13:40:30 | INFO  | Task 9e60dc9c-678b-4f42-b1a8-319b786003ba is in state STARTED 2025-03-23 13:40:30.161753 | orchestrator | 2025-03-23 13:40:30 | INFO  | Task 953b0b65-7e39-4d6d-938e-14a56fdcb5da is in state STARTED 2025-03-23 13:40:30.162801 | orchestrator | 2025-03-23 13:40:30 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:40:30.163961 | orchestrator | 2025-03-23 13:40:30 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:40:30.165382 | orchestrator | 2025-03-23 13:40:30 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:40:33.224096 | orchestrator | 2025-03-23 13:40:30 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:40:33.224196 | orchestrator | 2025-03-23 13:40:33 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:40:33.226344 | orchestrator | 2025-03-23 13:40:33 | INFO  | Task acedb010-b848-4aa3-965a-1ffc3ecc2e78 is in state STARTED 2025-03-23 13:40:33.230549 | orchestrator | 2025-03-23 13:40:33 | INFO  | Task 9e60dc9c-678b-4f42-b1a8-319b786003ba is in state SUCCESS 2025-03-23 13:40:33.232713 | orchestrator | 2025-03-23 13:40:33 | INFO  | Task 953b0b65-7e39-4d6d-938e-14a56fdcb5da is in state STARTED 2025-03-23 13:40:33.234443 | orchestrator | 2025-03-23 13:40:33 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:40:33.236022 | orchestrator | 2025-03-23 13:40:33 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:40:33.238100 | orchestrator | 2025-03-23 13:40:33 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:40:36.298183 | orchestrator | 2025-03-23 13:40:33 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:40:36.298299 | orchestrator | 2025-03-23 13:40:36 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:40:36.300776 | orchestrator | 2025-03-23 13:40:36 | INFO  | Task acedb010-b848-4aa3-965a-1ffc3ecc2e78 is in state STARTED 2025-03-23 13:40:36.306443 | orchestrator | 2025-03-23 13:40:36 | INFO  | Task 953b0b65-7e39-4d6d-938e-14a56fdcb5da is in state STARTED 2025-03-23 13:40:36.309824 | orchestrator | 2025-03-23 13:40:36 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:40:36.312281 | orchestrator | 2025-03-23 13:40:36 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:40:36.314723 | orchestrator | 2025-03-23 13:40:36 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 
2025-03-23 13:40:36.315298 | orchestrator | 2025-03-23 13:40:36 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:40:39.367278 | orchestrator | 2025-03-23 13:40:39 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:40:39.371844 | orchestrator | 2025-03-23 13:40:39 | INFO  | Task acedb010-b848-4aa3-965a-1ffc3ecc2e78 is in state STARTED 2025-03-23 13:40:39.388015 | orchestrator | 2025-03-23 13:40:39 | INFO  | Task 953b0b65-7e39-4d6d-938e-14a56fdcb5da is in state STARTED 2025-03-23 13:40:39.388053 | orchestrator | 2025-03-23 13:40:39 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:40:42.441464 | orchestrator | 2025-03-23 13:40:39 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:40:42.441576 | orchestrator | 2025-03-23 13:40:39 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:40:42.441596 | orchestrator | 2025-03-23 13:40:39 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:40:42.441628 | orchestrator | 2025-03-23 13:40:42 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:40:42.442764 | orchestrator | 2025-03-23 13:40:42 | INFO  | Task acedb010-b848-4aa3-965a-1ffc3ecc2e78 is in state STARTED 2025-03-23 13:40:42.445481 | orchestrator | 2025-03-23 13:40:42 | INFO  | Task 953b0b65-7e39-4d6d-938e-14a56fdcb5da is in state STARTED 2025-03-23 13:40:42.446781 | orchestrator | 2025-03-23 13:40:42 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:40:42.448617 | orchestrator | 2025-03-23 13:40:42 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:40:42.450435 | orchestrator | 2025-03-23 13:40:42 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:40:45.521164 | orchestrator | 2025-03-23 13:40:42 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:40:45.521283 | orchestrator | 2025-03-23 13:40:45 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:40:45.522124 | orchestrator | 2025-03-23 13:40:45 | INFO  | Task acedb010-b848-4aa3-965a-1ffc3ecc2e78 is in state STARTED 2025-03-23 13:40:45.523891 | orchestrator | 2025-03-23 13:40:45 | INFO  | Task 953b0b65-7e39-4d6d-938e-14a56fdcb5da is in state STARTED 2025-03-23 13:40:45.526118 | orchestrator | 2025-03-23 13:40:45 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:40:45.528109 | orchestrator | 2025-03-23 13:40:45 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:40:45.529740 | orchestrator | 2025-03-23 13:40:45 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:40:48.607066 | orchestrator | 2025-03-23 13:40:45 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:40:48.607192 | orchestrator | 2025-03-23 13:40:48 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:40:48.609062 | orchestrator | 2025-03-23 13:40:48 | INFO  | Task acedb010-b848-4aa3-965a-1ffc3ecc2e78 is in state STARTED 2025-03-23 13:40:48.610261 | orchestrator | 2025-03-23 13:40:48 | INFO  | Task 953b0b65-7e39-4d6d-938e-14a56fdcb5da is in state STARTED 2025-03-23 13:40:48.611113 | orchestrator | 2025-03-23 13:40:48 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:40:48.612972 | orchestrator | 2025-03-23 13:40:48 | INFO  | Task 
716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:40:48.615011 | orchestrator | 2025-03-23 13:40:48 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:40:51.671016 | orchestrator | 2025-03-23 13:40:48 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:40:51.671147 | orchestrator | 2025-03-23 13:40:51 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:40:51.675772 | orchestrator | 2025-03-23 13:40:51 | INFO  | Task acedb010-b848-4aa3-965a-1ffc3ecc2e78 is in state STARTED 2025-03-23 13:40:51.678381 | orchestrator | 2025-03-23 13:40:51 | INFO  | Task 953b0b65-7e39-4d6d-938e-14a56fdcb5da is in state SUCCESS 2025-03-23 13:40:51.681204 | orchestrator | 2025-03-23 13:40:51 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:40:51.682580 | orchestrator | 2025-03-23 13:40:51 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:40:51.686633 | orchestrator | 2025-03-23 13:40:51 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:40:51.687448 | orchestrator | 2025-03-23 13:40:51 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:40:54.743798 | orchestrator | 2025-03-23 13:40:54 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:40:54.745225 | orchestrator | 2025-03-23 13:40:54 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:40:54.746509 | orchestrator | 2025-03-23 13:40:54 | INFO  | Task acedb010-b848-4aa3-965a-1ffc3ecc2e78 is in state STARTED 2025-03-23 13:40:54.747470 | orchestrator | 2025-03-23 13:40:54 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:40:54.748583 | orchestrator | 2025-03-23 13:40:54 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:40:54.749748 | orchestrator | 2025-03-23 13:40:54 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:40:57.802821 | orchestrator | 2025-03-23 13:40:54 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:40:57.802934 | orchestrator | 2025-03-23 13:40:57 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:40:57.804882 | orchestrator | 2025-03-23 13:40:57 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:40:57.807813 | orchestrator | 2025-03-23 13:40:57 | INFO  | Task acedb010-b848-4aa3-965a-1ffc3ecc2e78 is in state STARTED 2025-03-23 13:40:57.809869 | orchestrator | 2025-03-23 13:40:57 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:40:57.814223 | orchestrator | 2025-03-23 13:40:57 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:40:57.815956 | orchestrator | 2025-03-23 13:40:57 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:41:00.860406 | orchestrator | 2025-03-23 13:40:57 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:41:00.860527 | orchestrator | 2025-03-23 13:41:00 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:41:00.860943 | orchestrator | 2025-03-23 13:41:00 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:41:00.862467 | orchestrator | 2025-03-23 13:41:00 | INFO  | Task acedb010-b848-4aa3-965a-1ffc3ecc2e78 is in state STARTED 2025-03-23 13:41:00.863379 | orchestrator | 2025-03-23 
13:41:00 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:41:00.864532 | orchestrator | 2025-03-23 13:41:00 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:41:00.865464 | orchestrator | 2025-03-23 13:41:00 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:41:00.865913 | orchestrator | 2025-03-23 13:41:00 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:41:03.903928 | orchestrator | 2025-03-23 13:41:03 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:41:03.909291 | orchestrator | 2025-03-23 13:41:03 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:41:03.911243 | orchestrator | 2025-03-23 13:41:03 | INFO  | Task acedb010-b848-4aa3-965a-1ffc3ecc2e78 is in state STARTED 2025-03-23 13:41:03.914072 | orchestrator | 2025-03-23 13:41:03 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:41:03.915569 | orchestrator | 2025-03-23 13:41:03 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:41:03.916945 | orchestrator | 2025-03-23 13:41:03 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:41:03.917322 | orchestrator | 2025-03-23 13:41:03 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:41:06.980801 | orchestrator | 2025-03-23 13:41:06 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:41:06.982119 | orchestrator | 2025-03-23 13:41:06 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:41:06.982707 | orchestrator | 2025-03-23 13:41:06 | INFO  | Task acedb010-b848-4aa3-965a-1ffc3ecc2e78 is in state STARTED 2025-03-23 13:41:06.983916 | orchestrator | 2025-03-23 13:41:06 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:41:06.985808 | orchestrator | 2025-03-23 13:41:06 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:41:06.987132 | orchestrator | 2025-03-23 13:41:06 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:41:10.036195 | orchestrator | 2025-03-23 13:41:06 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:41:10.036310 | orchestrator | 2025-03-23 13:41:10 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:41:10.038222 | orchestrator | 2025-03-23 13:41:10 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:41:10.040128 | orchestrator | 2025-03-23 13:41:10 | INFO  | Task acedb010-b848-4aa3-965a-1ffc3ecc2e78 is in state STARTED 2025-03-23 13:41:10.041333 | orchestrator | 2025-03-23 13:41:10 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:41:10.042963 | orchestrator | 2025-03-23 13:41:10 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:41:13.101163 | orchestrator | 2025-03-23 13:41:10 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:41:13.101273 | orchestrator | 2025-03-23 13:41:10 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:41:13.101309 | orchestrator | 2025-03-23 13:41:13 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:41:13.103982 | orchestrator | 2025-03-23 13:41:13 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:41:13.106235 | 
orchestrator | 2025-03-23 13:41:13.106265 | orchestrator | 2025-03-23 13:41:13.106279 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-03-23 13:41:13.106320 | orchestrator | 2025-03-23 13:41:13.106335 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-03-23 13:41:13.106349 | orchestrator | Sunday 23 March 2025 13:39:43 +0000 (0:00:00.184) 0:00:00.184 ********** 2025-03-23 13:41:13.106364 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-03-23 13:41:13.106393 | orchestrator | 2025-03-23 13:41:13.106407 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-03-23 13:41:13.106421 | orchestrator | Sunday 23 March 2025 13:39:43 +0000 (0:00:00.260) 0:00:00.445 ********** 2025-03-23 13:41:13.106435 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-03-23 13:41:13.106449 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-03-23 13:41:13.106464 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-03-23 13:41:13.106477 | orchestrator | 2025-03-23 13:41:13.106491 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-03-23 13:41:13.106505 | orchestrator | Sunday 23 March 2025 13:39:44 +0000 (0:00:01.310) 0:00:01.755 ********** 2025-03-23 13:41:13.106519 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-03-23 13:41:13.106533 | orchestrator | 2025-03-23 13:41:13.106547 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-03-23 13:41:13.106561 | orchestrator | Sunday 23 March 2025 13:39:46 +0000 (0:00:01.299) 0:00:03.055 ********** 2025-03-23 13:41:13.106575 | orchestrator | changed: [testbed-manager] 2025-03-23 13:41:13.106607 | orchestrator | 2025-03-23 13:41:13.106621 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-03-23 13:41:13.106635 | orchestrator | Sunday 23 March 2025 13:39:47 +0000 (0:00:01.124) 0:00:04.180 ********** 2025-03-23 13:41:13.106648 | orchestrator | changed: [testbed-manager] 2025-03-23 13:41:13.106701 | orchestrator | 2025-03-23 13:41:13.106716 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-03-23 13:41:13.106730 | orchestrator | Sunday 23 March 2025 13:39:48 +0000 (0:00:01.101) 0:00:05.282 ********** 2025-03-23 13:41:13.106744 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
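The "FAILED - RETRYING: Manage cephclient service (10 retries left)" message above is expected rather than a failure: the task starts the cephclient container and keeps re-checking until the service reports as up, giving up only once the retry budget is exhausted; the "ok" that follows and the 31.42s entry in the timing recap show the check passed on a later attempt. The same wait-until-up pattern as a rough Python sketch using the Docker CLI (the container name and the exact check are assumptions, not taken from the role):

    import subprocess
    import time

    def wait_for_container(name="cephclient", retries=10, delay=5.0):
        """Retry until `docker inspect` reports the container as running."""
        for attempt in range(1, retries + 1):
            result = subprocess.run(
                ["docker", "inspect", "-f", "{{.State.Running}}", name],
                capture_output=True, text=True,
            )
            if result.returncode == 0 and result.stdout.strip() == "true":
                return True
            print(f"FAILED - RETRYING: {name} not up ({retries - attempt} retries left)")
            time.sleep(delay)
        raise RuntimeError(f"{name} did not come up after {retries} attempts")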
2025-03-23 13:41:13.106758 | orchestrator | ok: [testbed-manager] 2025-03-23 13:41:13.106772 | orchestrator | 2025-03-23 13:41:13.106786 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-03-23 13:41:13.106800 | orchestrator | Sunday 23 March 2025 13:40:19 +0000 (0:00:31.423) 0:00:36.705 ********** 2025-03-23 13:41:13.106814 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-03-23 13:41:13.106829 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-03-23 13:41:13.106845 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-03-23 13:41:13.106860 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-03-23 13:41:13.106875 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-03-23 13:41:13.106890 | orchestrator | 2025-03-23 13:41:13.106906 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-03-23 13:41:13.106920 | orchestrator | Sunday 23 March 2025 13:40:24 +0000 (0:00:04.389) 0:00:41.095 ********** 2025-03-23 13:41:13.106936 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-03-23 13:41:13.107038 | orchestrator | 2025-03-23 13:41:13.107056 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-03-23 13:41:13.107071 | orchestrator | Sunday 23 March 2025 13:40:24 +0000 (0:00:00.575) 0:00:41.671 ********** 2025-03-23 13:41:13.107087 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:41:13.107109 | orchestrator | 2025-03-23 13:41:13.107125 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-03-23 13:41:13.107141 | orchestrator | Sunday 23 March 2025 13:40:24 +0000 (0:00:00.153) 0:00:41.824 ********** 2025-03-23 13:41:13.107166 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:41:13.107182 | orchestrator | 2025-03-23 13:41:13.107197 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-03-23 13:41:13.107212 | orchestrator | Sunday 23 March 2025 13:40:25 +0000 (0:00:00.358) 0:00:42.182 ********** 2025-03-23 13:41:13.107225 | orchestrator | changed: [testbed-manager] 2025-03-23 13:41:13.107239 | orchestrator | 2025-03-23 13:41:13.107253 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-03-23 13:41:13.107267 | orchestrator | Sunday 23 March 2025 13:40:26 +0000 (0:00:01.765) 0:00:43.948 ********** 2025-03-23 13:41:13.107281 | orchestrator | changed: [testbed-manager] 2025-03-23 13:41:13.107294 | orchestrator | 2025-03-23 13:41:13.107308 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-03-23 13:41:13.107322 | orchestrator | Sunday 23 March 2025 13:40:28 +0000 (0:00:01.107) 0:00:45.056 ********** 2025-03-23 13:41:13.107336 | orchestrator | changed: [testbed-manager] 2025-03-23 13:41:13.107349 | orchestrator | 2025-03-23 13:41:13.107363 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-03-23 13:41:13.107377 | orchestrator | Sunday 23 March 2025 13:40:28 +0000 (0:00:00.671) 0:00:45.727 ********** 2025-03-23 13:41:13.107391 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-03-23 13:41:13.107410 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-03-23 13:41:13.107424 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-03-23 13:41:13.107438 | orchestrator | ok: 
[testbed-manager] => (item=rbd) 2025-03-23 13:41:13.107452 | orchestrator | 2025-03-23 13:41:13.107466 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:41:13.107480 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 13:41:13.107495 | orchestrator | 2025-03-23 13:41:13.107521 | orchestrator | Sunday 23 March 2025 13:40:30 +0000 (0:00:01.697) 0:00:47.425 ********** 2025-03-23 13:41:13.107536 | orchestrator | =============================================================================== 2025-03-23 13:41:13.107551 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 31.42s 2025-03-23 13:41:13.107564 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.39s 2025-03-23 13:41:13.107578 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.77s 2025-03-23 13:41:13.107592 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.70s 2025-03-23 13:41:13.107606 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.31s 2025-03-23 13:41:13.107620 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.30s 2025-03-23 13:41:13.107634 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.12s 2025-03-23 13:41:13.107647 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 1.11s 2025-03-23 13:41:13.107699 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.10s 2025-03-23 13:41:13.107713 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.67s 2025-03-23 13:41:13.107727 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.58s 2025-03-23 13:41:13.107741 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.36s 2025-03-23 13:41:13.107755 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.26s 2025-03-23 13:41:13.107769 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2025-03-23 13:41:13.107783 | orchestrator | 2025-03-23 13:41:13.107797 | orchestrator | 2025-03-23 13:41:13.107811 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-03-23 13:41:13.107825 | orchestrator | 2025-03-23 13:41:13.107839 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-03-23 13:41:13.107853 | orchestrator | Sunday 23 March 2025 13:40:15 +0000 (0:00:00.195) 0:00:00.195 ********** 2025-03-23 13:41:13.107875 | orchestrator | changed: [localhost] 2025-03-23 13:41:13.107889 | orchestrator | 2025-03-23 13:41:13.107903 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-03-23 13:41:13.107917 | orchestrator | Sunday 23 March 2025 13:40:16 +0000 (0:00:00.830) 0:00:01.025 ********** 2025-03-23 13:41:13.107931 | orchestrator | changed: [localhost] 2025-03-23 13:41:13.107945 | orchestrator | 2025-03-23 13:41:13.107959 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-03-23 13:41:13.107973 | orchestrator | Sunday 23 March 2025 13:40:46 +0000 (0:00:30.095) 0:00:31.121 
********** 2025-03-23 13:41:13.107987 | orchestrator | changed: [localhost] 2025-03-23 13:41:13.108000 | orchestrator | 2025-03-23 13:41:13.108014 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 13:41:13.108028 | orchestrator | 2025-03-23 13:41:13.108136 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 13:41:13.108155 | orchestrator | Sunday 23 March 2025 13:40:49 +0000 (0:00:03.744) 0:00:34.865 ********** 2025-03-23 13:41:13.108169 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:41:13.108183 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:41:13.108196 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:41:13.108210 | orchestrator | 2025-03-23 13:41:13.108224 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-23 13:41:13.108238 | orchestrator | Sunday 23 March 2025 13:40:50 +0000 (0:00:00.447) 0:00:35.313 ********** 2025-03-23 13:41:13.108252 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-03-23 13:41:13.108265 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-03-23 13:41:13.108286 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-03-23 13:41:13.108300 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-03-23 13:41:13.108314 | orchestrator | 2025-03-23 13:41:13.108328 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-03-23 13:41:13.108342 | orchestrator | skipping: no hosts matched 2025-03-23 13:41:13.108356 | orchestrator | 2025-03-23 13:41:13.108370 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:41:13.108384 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:41:13.108399 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:41:13.108413 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:41:13.108427 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:41:13.108441 | orchestrator | 2025-03-23 13:41:13.108455 | orchestrator | 2025-03-23 13:41:13.108469 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:41:13.108483 | orchestrator | Sunday 23 March 2025 13:40:50 +0000 (0:00:00.515) 0:00:35.828 ********** 2025-03-23 13:41:13.108496 | orchestrator | =============================================================================== 2025-03-23 13:41:13.108510 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 30.10s 2025-03-23 13:41:13.108524 | orchestrator | Download ironic-agent kernel -------------------------------------------- 3.74s 2025-03-23 13:41:13.108538 | orchestrator | Ensure the destination directory exists --------------------------------- 0.83s 2025-03-23 13:41:13.108552 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s 2025-03-23 13:41:13.108565 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.45s 2025-03-23 13:41:13.108579 | orchestrator | 2025-03-23 13:41:13.108601 | orchestrator | 2025-03-23 13:41:13 | INFO  | Task 
acedb010-b848-4aa3-965a-1ffc3ecc2e78 is in state SUCCESS 2025-03-23 13:41:13.109220 | orchestrator | 2025-03-23 13:41:13 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:41:13.109254 | orchestrator | 2025-03-23 13:41:13 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:41:13.109272 | orchestrator | 2025-03-23 13:41:13 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:41:13.109904 | orchestrator | 2025-03-23 13:41:13 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:41:16.161265 | orchestrator | 2025-03-23 13:41:16 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:41:16.161855 | orchestrator | 2025-03-23 13:41:16 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:41:16.161888 | orchestrator | 2025-03-23 13:41:16 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:41:16.161904 | orchestrator | 2025-03-23 13:41:16 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:41:16.161925 | orchestrator | 2025-03-23 13:41:16 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:41:19.193610 | orchestrator | 2025-03-23 13:41:16 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:41:19.193797 | orchestrator | 2025-03-23 13:41:19 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:41:19.195800 | orchestrator | 2025-03-23 13:41:19 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:41:19.198949 | orchestrator | 2025-03-23 13:41:19 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:41:19.199263 | orchestrator | 2025-03-23 13:41:19 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:41:22.232364 | orchestrator | 2025-03-23 13:41:19 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:41:22.232472 | orchestrator | 2025-03-23 13:41:19 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:41:22.232505 | orchestrator | 2025-03-23 13:41:22 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:41:22.233097 | orchestrator | 2025-03-23 13:41:22 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:41:22.233433 | orchestrator | 2025-03-23 13:41:22 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:41:22.234136 | orchestrator | 2025-03-23 13:41:22 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:41:22.238383 | orchestrator | 2025-03-23 13:41:22 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:41:22.238723 | orchestrator | 2025-03-23 13:41:22 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:41:25.281780 | orchestrator | 2025-03-23 13:41:25 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:41:25.282078 | orchestrator | 2025-03-23 13:41:25 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:41:25.284084 | orchestrator | 2025-03-23 13:41:25 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:41:25.284889 | orchestrator | 2025-03-23 13:41:25 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:41:25.285460 | orchestrator | 2025-03-23 
13:41:25 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:41:28.331968 | orchestrator | 2025-03-23 13:41:25 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:41:28.332095 | orchestrator | 2025-03-23 13:41:28 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:41:28.332768 | orchestrator | 2025-03-23 13:41:28 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:41:28.332804 | orchestrator | 2025-03-23 13:41:28 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:41:28.333404 | orchestrator | 2025-03-23 13:41:28 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:41:28.334343 | orchestrator | 2025-03-23 13:41:28 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:41:31.394285 | orchestrator | 2025-03-23 13:41:28 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:41:31.394412 | orchestrator | 2025-03-23 13:41:31 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:41:31.394896 | orchestrator | 2025-03-23 13:41:31 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:41:31.396406 | orchestrator | 2025-03-23 13:41:31 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:41:31.397452 | orchestrator | 2025-03-23 13:41:31 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:41:31.398434 | orchestrator | 2025-03-23 13:41:31 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:41:34.443479 | orchestrator | 2025-03-23 13:41:31 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:41:34.443604 | orchestrator | 2025-03-23 13:41:34 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:41:37.498000 | orchestrator | 2025-03-23 13:41:34 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:41:37.498159 | orchestrator | 2025-03-23 13:41:34 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:41:37.498179 | orchestrator | 2025-03-23 13:41:34 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:41:37.498194 | orchestrator | 2025-03-23 13:41:34 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:41:37.498209 | orchestrator | 2025-03-23 13:41:34 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:41:37.498241 | orchestrator | 2025-03-23 13:41:37 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:41:37.500159 | orchestrator | 2025-03-23 13:41:37 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:41:37.501166 | orchestrator | 2025-03-23 13:41:37 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:41:37.501197 | orchestrator | 2025-03-23 13:41:37 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:41:37.502765 | orchestrator | 2025-03-23 13:41:37 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:41:40.570266 | orchestrator | 2025-03-23 13:41:37 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:41:40.570392 | orchestrator | 2025-03-23 13:41:40 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:41:40.575785 | orchestrator | 2025-03-23 
13:41:40 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:41:40.580684 | orchestrator | 2025-03-23 13:41:40 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:41:40.582349 | orchestrator | 2025-03-23 13:41:40 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:41:40.583282 | orchestrator | 2025-03-23 13:41:40 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:41:40.585995 | orchestrator | 2025-03-23 13:41:40 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:41:43.632601 | orchestrator | 2025-03-23 13:41:43 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:41:43.634144 | orchestrator | 2025-03-23 13:41:43 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:41:43.635611 | orchestrator | 2025-03-23 13:41:43 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:41:43.640323 | orchestrator | 2025-03-23 13:41:43 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:41:46.692920 | orchestrator | 2025-03-23 13:41:43 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:41:46.693037 | orchestrator | 2025-03-23 13:41:43 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:41:46.693075 | orchestrator | 2025-03-23 13:41:46 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:41:46.694130 | orchestrator | 2025-03-23 13:41:46 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:41:46.694166 | orchestrator | 2025-03-23 13:41:46 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:41:46.694588 | orchestrator | 2025-03-23 13:41:46 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:41:46.695637 | orchestrator | 2025-03-23 13:41:46 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:41:49.747374 | orchestrator | 2025-03-23 13:41:46 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:41:49.747498 | orchestrator | 2025-03-23 13:41:49 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:41:49.750992 | orchestrator | 2025-03-23 13:41:49 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:41:49.752656 | orchestrator | 2025-03-23 13:41:49 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:41:49.754558 | orchestrator | 2025-03-23 13:41:49 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:41:49.755427 | orchestrator | 2025-03-23 13:41:49 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:41:49.755765 | orchestrator | 2025-03-23 13:41:49 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:41:52.806475 | orchestrator | 2025-03-23 13:41:52 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:41:52.807788 | orchestrator | 2025-03-23 13:41:52 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:41:52.808508 | orchestrator | 2025-03-23 13:41:52 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:41:52.809873 | orchestrator | 2025-03-23 13:41:52 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:41:52.811345 | 
orchestrator | 2025-03-23 13:41:52 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:41:55.863074 | orchestrator | 2025-03-23 13:41:52 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:41:55.863191 | orchestrator | 2025-03-23 13:41:55 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:41:55.865245 | orchestrator | 2025-03-23 13:41:55 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:41:55.866609 | orchestrator | 2025-03-23 13:41:55 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:41:55.868417 | orchestrator | 2025-03-23 13:41:55 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:41:55.869739 | orchestrator | 2025-03-23 13:41:55 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:41:55.869914 | orchestrator | 2025-03-23 13:41:55 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:41:58.931235 | orchestrator | 2025-03-23 13:41:58 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:41:58.932891 | orchestrator | 2025-03-23 13:41:58 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:41:58.933528 | orchestrator | 2025-03-23 13:41:58 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:41:58.935030 | orchestrator | 2025-03-23 13:41:58 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:41:58.935549 | orchestrator | 2025-03-23 13:41:58 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:42:01.977433 | orchestrator | 2025-03-23 13:41:58 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:42:01.977636 | orchestrator | 2025-03-23 13:42:01 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:42:01.978535 | orchestrator | 2025-03-23 13:42:01 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:42:01.978566 | orchestrator | 2025-03-23 13:42:01 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:42:01.978587 | orchestrator | 2025-03-23 13:42:01 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:42:01.979188 | orchestrator | 2025-03-23 13:42:01 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:42:05.030521 | orchestrator | 2025-03-23 13:42:01 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:42:05.030781 | orchestrator | 2025-03-23 13:42:05 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:42:05.032240 | orchestrator | 2025-03-23 13:42:05 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:42:05.032277 | orchestrator | 2025-03-23 13:42:05 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:42:05.033167 | orchestrator | 2025-03-23 13:42:05 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:42:05.034402 | orchestrator | 2025-03-23 13:42:05 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:42:08.081933 | orchestrator | 2025-03-23 13:42:05 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:42:08.082092 | orchestrator | 2025-03-23 13:42:08 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:42:08.086954 | 
orchestrator | 2025-03-23 13:42:08 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:42:08.088737 | orchestrator | 2025-03-23 13:42:08 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:42:08.089880 | orchestrator | 2025-03-23 13:42:08 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:42:08.090705 | orchestrator | 2025-03-23 13:42:08 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:42:11.132293 | orchestrator | 2025-03-23 13:42:08 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:42:11.132419 | orchestrator | 2025-03-23 13:42:11 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:42:11.132937 | orchestrator | 2025-03-23 13:42:11 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:42:11.133857 | orchestrator | 2025-03-23 13:42:11 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:42:11.134834 | orchestrator | 2025-03-23 13:42:11 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:42:11.135480 | orchestrator | 2025-03-23 13:42:11 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:42:14.184329 | orchestrator | 2025-03-23 13:42:11 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:42:14.184457 | orchestrator | 2025-03-23 13:42:14 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:42:14.187209 | orchestrator | 2025-03-23 13:42:14 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:42:14.189897 | orchestrator | 2025-03-23 13:42:14 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:42:14.191176 | orchestrator | 2025-03-23 13:42:14 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:42:14.191226 | orchestrator | 2025-03-23 13:42:14 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:42:14.193716 | orchestrator | 2025-03-23 13:42:14 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:42:17.234348 | orchestrator | 2025-03-23 13:42:17 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:42:17.234756 | orchestrator | 2025-03-23 13:42:17 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:42:17.234799 | orchestrator | 2025-03-23 13:42:17 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:42:17.237426 | orchestrator | 2025-03-23 13:42:17 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:42:17.239246 | orchestrator | 2025-03-23 13:42:17 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:42:20.285545 | orchestrator | 2025-03-23 13:42:17 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:42:20.285724 | orchestrator | 2025-03-23 13:42:20 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:42:20.286256 | orchestrator | 2025-03-23 13:42:20 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:42:20.286293 | orchestrator | 2025-03-23 13:42:20 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:42:20.286957 | orchestrator | 2025-03-23 13:42:20 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 
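With several playbooks polled in parallel, the state lines interleave and repeat for minutes at a time. When reading such a log after the fact it helps to collapse them to one summary per task; a small Python sketch that parses the "Task <uuid> is in state <STATE>" entries and reports each task's last seen state and time span:

    import re
    from collections import OrderedDict

    LINE_RE = re.compile(
        r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) \| INFO\s+\| "
        r"Task ([0-9a-f-]{36}) is in state (\w+)"
    )

    def summarize(log_text):
        """Reduce repeated task-state lines to a time span and final state per task."""
        tasks = OrderedDict()
        for match in LINE_RE.finditer(log_text):
            timestamp, task_id, state = match.groups()
            entry = tasks.setdefault(task_id, {"first": timestamp})
            entry["last"] = timestamp
            entry["state"] = state
        for task_id, entry in tasks.items():
            print(f"{task_id}: {entry['state']} ({entry['first']} -> {entry['last']})")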
2025-03-23 13:42:20.289595 | orchestrator | 2025-03-23 13:42:20 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:42:23.332615 | orchestrator | 2025-03-23 13:42:20 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:42:23.332794 | orchestrator | 2025-03-23 13:42:23 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:42:23.333391 | orchestrator | 2025-03-23 13:42:23 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:42:23.334211 | orchestrator | 2025-03-23 13:42:23 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:42:23.339609 | orchestrator | 2025-03-23 13:42:23 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:42:23.340353 | orchestrator | 2025-03-23 13:42:23 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:42:26.388097 | orchestrator | 2025-03-23 13:42:23 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:42:26.388251 | orchestrator | 2025-03-23 13:42:26 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:42:26.389821 | orchestrator | 2025-03-23 13:42:26 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:42:26.390839 | orchestrator | 2025-03-23 13:42:26 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:42:26.392067 | orchestrator | 2025-03-23 13:42:26 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:42:26.394562 | orchestrator | 2025-03-23 13:42:26 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:42:26.394735 | orchestrator | 2025-03-23 13:42:26 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:42:29.441767 | orchestrator | 2025-03-23 13:42:29 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:42:29.442895 | orchestrator | 2025-03-23 13:42:29 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:42:29.445347 | orchestrator | 2025-03-23 13:42:29 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:42:29.448437 | orchestrator | 2025-03-23 13:42:29 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:42:29.449729 | orchestrator | 2025-03-23 13:42:29 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:42:29.450197 | orchestrator | 2025-03-23 13:42:29 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:42:32.500917 | orchestrator | 2025-03-23 13:42:32 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:42:32.502185 | orchestrator | 2025-03-23 13:42:32 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:42:32.503767 | orchestrator | 2025-03-23 13:42:32 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:42:32.505127 | orchestrator | 2025-03-23 13:42:32 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:42:32.507054 | orchestrator | 2025-03-23 13:42:32 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:42:35.566446 | orchestrator | 2025-03-23 13:42:32 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:42:35.566576 | orchestrator | 2025-03-23 13:42:35 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 
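A few polling cycles further on, task b99aa15e-a210-438d-a6b8-5250a6d10a29 reaches SUCCESS and its buffered output is flushed: it is the play that bootstraps the Ceph dashboard. Judging from the task names, it disables the mgr dashboard module, sets the ssl, server_port, server_addr and standby options, re-enables the module and creates an admin user from a temporary password file. As an illustration only (the role's actual tasks are not shown in this log, and the user name and password file path below are placeholders), the equivalent ceph CLI sequence could be scripted like this:

    import subprocess

    DASHBOARD_SETTINGS = {
        "mgr/dashboard/ssl": "false",
        "mgr/dashboard/server_port": "7000",
        "mgr/dashboard/server_addr": "0.0.0.0",
        "mgr/dashboard/standby_behaviour": "error",
        "mgr/dashboard/standby_error_status_code": "404",
    }

    def ceph(*args):
        """Run a ceph CLI command and fail loudly on a non-zero exit."""
        subprocess.run(["ceph", *args], check=True)

    def bootstrap_dashboard(password_file="/tmp/ceph_dashboard_password"):
        ceph("mgr", "module", "disable", "dashboard")
        for key, value in DASHBOARD_SETTINGS.items():
            ceph("config", "set", "mgr", key, value)
        ceph("mgr", "module", "enable", "dashboard")
        # ac-user-create reads the password from a file passed via -i
        ceph("dashboard", "ac-user-create", "admin", "-i", password_file, "administrator")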
2025-03-23 13:42:35.569977 | orchestrator | 2025-03-23 13:42:35 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:42:35.572145 | orchestrator | 2025-03-23 13:42:35 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:42:35.576214 | orchestrator | 2025-03-23 13:42:35 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:42:35.579881 | orchestrator | 2025-03-23 13:42:35 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:42:38.620337 | orchestrator | 2025-03-23 13:42:35 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:42:38.620470 | orchestrator | 2025-03-23 13:42:38 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:42:38.621515 | orchestrator | 2025-03-23 13:42:38 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:42:38.623589 | orchestrator | 2025-03-23 13:42:38 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:42:38.626078 | orchestrator | 2025-03-23 13:42:38 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:42:38.627400 | orchestrator | 2025-03-23 13:42:38 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:42:41.694906 | orchestrator | 2025-03-23 13:42:38 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:42:41.695037 | orchestrator | 2025-03-23 13:42:41 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:42:41.695697 | orchestrator | 2025-03-23 13:42:41 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state STARTED 2025-03-23 13:42:41.697864 | orchestrator | 2025-03-23 13:42:41 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:42:41.700124 | orchestrator | 2025-03-23 13:42:41 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:42:44.745497 | orchestrator | 2025-03-23 13:42:41 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:42:44.745602 | orchestrator | 2025-03-23 13:42:41 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:42:44.745635 | orchestrator | 2025-03-23 13:42:44 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:42:44.746414 | orchestrator | 2025-03-23 13:42:44 | INFO  | Task b99aa15e-a210-438d-a6b8-5250a6d10a29 is in state SUCCESS 2025-03-23 13:42:44.747977 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-03-23 13:42:44.748097 | orchestrator | 2025-03-23 13:42:44.748114 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-03-23 13:42:44.748129 | orchestrator | 2025-03-23 13:42:44.748143 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-03-23 13:42:44.748157 | orchestrator | Sunday 23 March 2025 13:40:35 +0000 (0:00:00.568) 0:00:00.568 ********** 2025-03-23 13:42:44.748171 | orchestrator | changed: [testbed-manager] 2025-03-23 13:42:44.748204 | orchestrator | 2025-03-23 13:42:44.748218 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-03-23 13:42:44.748233 | orchestrator | Sunday 23 March 2025 13:40:37 +0000 (0:00:01.828) 0:00:02.397 ********** 2025-03-23 13:42:44.748247 | orchestrator | changed: [testbed-manager] 2025-03-23 13:42:44.748261 | 
orchestrator | 2025-03-23 13:42:44.748275 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-03-23 13:42:44.748289 | orchestrator | Sunday 23 March 2025 13:40:38 +0000 (0:00:01.201) 0:00:03.598 ********** 2025-03-23 13:42:44.748303 | orchestrator | changed: [testbed-manager] 2025-03-23 13:42:44.748317 | orchestrator | 2025-03-23 13:42:44.748331 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-03-23 13:42:44.748345 | orchestrator | Sunday 23 March 2025 13:40:39 +0000 (0:00:01.182) 0:00:04.781 ********** 2025-03-23 13:42:44.748359 | orchestrator | changed: [testbed-manager] 2025-03-23 13:42:44.748373 | orchestrator | 2025-03-23 13:42:44.748388 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-03-23 13:42:44.748402 | orchestrator | Sunday 23 March 2025 13:40:40 +0000 (0:00:01.114) 0:00:05.895 ********** 2025-03-23 13:42:44.748416 | orchestrator | changed: [testbed-manager] 2025-03-23 13:42:44.748430 | orchestrator | 2025-03-23 13:42:44.748444 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-03-23 13:42:44.748458 | orchestrator | Sunday 23 March 2025 13:40:41 +0000 (0:00:01.157) 0:00:07.053 ********** 2025-03-23 13:42:44.748472 | orchestrator | changed: [testbed-manager] 2025-03-23 13:42:44.748485 | orchestrator | 2025-03-23 13:42:44.748499 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-03-23 13:42:44.748513 | orchestrator | Sunday 23 March 2025 13:40:42 +0000 (0:00:01.086) 0:00:08.140 ********** 2025-03-23 13:42:44.748527 | orchestrator | changed: [testbed-manager] 2025-03-23 13:42:44.748541 | orchestrator | 2025-03-23 13:42:44.748555 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-03-23 13:42:44.748592 | orchestrator | Sunday 23 March 2025 13:40:45 +0000 (0:00:02.216) 0:00:10.357 ********** 2025-03-23 13:42:44.748607 | orchestrator | changed: [testbed-manager] 2025-03-23 13:42:44.748621 | orchestrator | 2025-03-23 13:42:44.748635 | orchestrator | TASK [Create admin user] ******************************************************* 2025-03-23 13:42:44.748649 | orchestrator | Sunday 23 March 2025 13:40:46 +0000 (0:00:01.258) 0:00:11.616 ********** 2025-03-23 13:42:44.748663 | orchestrator | changed: [testbed-manager] 2025-03-23 13:42:44.748698 | orchestrator | 2025-03-23 13:42:44.748714 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-03-23 13:42:44.748729 | orchestrator | Sunday 23 March 2025 13:41:05 +0000 (0:00:19.227) 0:00:30.843 ********** 2025-03-23 13:42:44.748745 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:42:44.748760 | orchestrator | 2025-03-23 13:42:44.748781 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-03-23 13:42:44.748797 | orchestrator | 2025-03-23 13:42:44.748813 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-03-23 13:42:44.748828 | orchestrator | Sunday 23 March 2025 13:41:06 +0000 (0:00:00.749) 0:00:31.592 ********** 2025-03-23 13:42:44.748843 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:42:44.748859 | orchestrator | 2025-03-23 13:42:44.748874 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-03-23 13:42:44.748889 | 
orchestrator | 2025-03-23 13:42:44.748904 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-03-23 13:42:44.748919 | orchestrator | Sunday 23 March 2025 13:41:08 +0000 (0:00:02.422) 0:00:34.015 ********** 2025-03-23 13:42:44.748935 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:42:44.748950 | orchestrator | 2025-03-23 13:42:44.748966 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-03-23 13:42:44.748981 | orchestrator | 2025-03-23 13:42:44.748996 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-03-23 13:42:44.749012 | orchestrator | Sunday 23 March 2025 13:41:10 +0000 (0:00:02.051) 0:00:36.066 ********** 2025-03-23 13:42:44.749026 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:42:44.749041 | orchestrator | 2025-03-23 13:42:44.749148 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:42:44.749169 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-23 13:42:44.749185 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:42:44.749242 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:42:44.749259 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:42:44.749273 | orchestrator | 2025-03-23 13:42:44.749287 | orchestrator | 2025-03-23 13:42:44.749301 | orchestrator | 2025-03-23 13:42:44.749315 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:42:44.749329 | orchestrator | Sunday 23 March 2025 13:41:12 +0000 (0:00:01.582) 0:00:37.649 ********** 2025-03-23 13:42:44.749343 | orchestrator | =============================================================================== 2025-03-23 13:42:44.749357 | orchestrator | Create admin user ------------------------------------------------------ 19.23s 2025-03-23 13:42:44.749380 | orchestrator | Restart ceph manager service -------------------------------------------- 6.06s 2025-03-23 13:42:44.749395 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.22s 2025-03-23 13:42:44.749409 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.83s 2025-03-23 13:42:44.749423 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.26s 2025-03-23 13:42:44.749437 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.20s 2025-03-23 13:42:44.749461 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.18s 2025-03-23 13:42:44.749475 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.16s 2025-03-23 13:42:44.749488 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.11s 2025-03-23 13:42:44.749502 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.09s 2025-03-23 13:42:44.749516 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.75s 2025-03-23 13:42:44.749530 | orchestrator | 2025-03-23 13:42:44.749544 | orchestrator | 2025-03-23 13:42:44.749557 | orchestrator | PLAY [Group 
hosts based on configuration] ************************************** 2025-03-23 13:42:44.749571 | orchestrator | 2025-03-23 13:42:44.749585 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 13:42:44.749599 | orchestrator | Sunday 23 March 2025 13:40:54 +0000 (0:00:00.431) 0:00:00.431 ********** 2025-03-23 13:42:44.749612 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:42:44.749627 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:42:44.749641 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:42:44.749655 | orchestrator | 2025-03-23 13:42:44.749688 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-23 13:42:44.749704 | orchestrator | Sunday 23 March 2025 13:40:55 +0000 (0:00:00.550) 0:00:00.981 ********** 2025-03-23 13:42:44.749718 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-03-23 13:42:44.749732 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-03-23 13:42:44.749746 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-03-23 13:42:44.749760 | orchestrator | 2025-03-23 13:42:44.749774 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-03-23 13:42:44.749790 | orchestrator | 2025-03-23 13:42:44.749805 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-03-23 13:42:44.749820 | orchestrator | Sunday 23 March 2025 13:40:55 +0000 (0:00:00.434) 0:00:01.415 ********** 2025-03-23 13:42:44.749835 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:42:44.749852 | orchestrator | 2025-03-23 13:42:44.749872 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-03-23 13:42:44.749888 | orchestrator | Sunday 23 March 2025 13:40:56 +0000 (0:00:00.818) 0:00:02.234 ********** 2025-03-23 13:42:44.749904 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-03-23 13:42:44.749919 | orchestrator | 2025-03-23 13:42:44.749934 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-03-23 13:42:44.749953 | orchestrator | Sunday 23 March 2025 13:41:00 +0000 (0:00:04.053) 0:00:06.287 ********** 2025-03-23 13:42:44.749969 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-03-23 13:42:44.749985 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-03-23 13:42:44.750001 | orchestrator | 2025-03-23 13:42:44.750078 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-03-23 13:42:44.750098 | orchestrator | Sunday 23 March 2025 13:41:08 +0000 (0:00:07.542) 0:00:13.830 ********** 2025-03-23 13:42:44.750114 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-03-23 13:42:44.750130 | orchestrator | 2025-03-23 13:42:44.750144 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-03-23 13:42:44.750158 | orchestrator | Sunday 23 March 2025 13:41:12 +0000 (0:00:04.670) 0:00:18.500 ********** 2025-03-23 13:42:44.750171 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-03-23 13:42:44.750185 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-03-23 
13:42:44.750199 | orchestrator | 2025-03-23 13:42:44.750213 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-03-23 13:42:44.750227 | orchestrator | Sunday 23 March 2025 13:41:17 +0000 (0:00:04.332) 0:00:22.832 ********** 2025-03-23 13:42:44.750248 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-03-23 13:42:44.750262 | orchestrator | 2025-03-23 13:42:44.750276 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-03-23 13:42:44.750290 | orchestrator | Sunday 23 March 2025 13:41:21 +0000 (0:00:03.810) 0:00:26.643 ********** 2025-03-23 13:42:44.750304 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-03-23 13:42:44.750318 | orchestrator | 2025-03-23 13:42:44.750332 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-03-23 13:42:44.750346 | orchestrator | Sunday 23 March 2025 13:41:26 +0000 (0:00:05.595) 0:00:32.238 ********** 2025-03-23 13:42:44.750360 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:42:44.750374 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:42:44.750388 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:42:44.750402 | orchestrator | 2025-03-23 13:42:44.750416 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-03-23 13:42:44.750430 | orchestrator | Sunday 23 March 2025 13:41:27 +0000 (0:00:00.894) 0:00:33.132 ********** 2025-03-23 13:42:44.750458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:44.750479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:44.750495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:44.750510 | orchestrator | 2025-03-23 13:42:44.750524 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-03-23 13:42:44.750545 | orchestrator | Sunday 23 March 2025 13:41:30 +0000 (0:00:03.176) 0:00:36.309 ********** 2025-03-23 13:42:44.750559 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:42:44.750573 | orchestrator | 2025-03-23 13:42:44.750587 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-03-23 13:42:44.750601 | orchestrator | Sunday 23 March 2025 13:41:31 +0000 (0:00:00.488) 0:00:36.798 ********** 2025-03-23 13:42:44.750615 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:42:44.750629 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:42:44.750643 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:42:44.750656 | orchestrator | 2025-03-23 13:42:44.750700 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-03-23 13:42:44.750715 | orchestrator | Sunday 23 March 2025 13:41:32 +0000 (0:00:01.364) 0:00:38.163 ********** 2025-03-23 13:42:44.750729 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:42:44.750743 | orchestrator | 2025-03-23 13:42:44.750757 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-03-23 13:42:44.750771 | orchestrator | Sunday 23 March 2025 13:41:33 +0000 (0:00:01.349) 0:00:39.512 ********** 2025-03-23 13:42:44.750794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:44.750810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:44.750825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:44.750847 | orchestrator | 2025-03-23 13:42:44.750861 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-03-23 13:42:44.750875 | orchestrator | Sunday 23 March 2025 13:41:37 +0000 (0:00:03.770) 0:00:43.282 ********** 2025-03-23 13:42:44.750889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-23 13:42:44.750904 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:42:44.750918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-23 13:42:44.750938 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:42:44.750953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-23 13:42:44.750967 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:42:44.750981 | orchestrator | 2025-03-23 13:42:44.750995 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-03-23 13:42:44.751009 | orchestrator | Sunday 23 March 2025 13:41:40 +0000 (0:00:02.970) 0:00:46.252 ********** 2025-03-23 13:42:44.751024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-23 13:42:44.751046 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:42:44.751060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-23 13:42:44.751075 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:42:44.751090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-23 13:42:44.751104 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:42:44.751118 | orchestrator | 2025-03-23 13:42:44.751137 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-03-23 13:42:44.751152 | orchestrator | Sunday 23 March 2025 13:41:43 +0000 (0:00:02.647) 0:00:48.899 ********** 2025-03-23 13:42:44.751166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:44.751276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:44.751304 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:44.751319 | orchestrator | 2025-03-23 13:42:44.751333 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-03-23 13:42:44.751348 | orchestrator | Sunday 23 March 2025 13:41:46 +0000 (0:00:03.346) 0:00:52.246 ********** 2025-03-23 13:42:44.751362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:44.751393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:44.751409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:44.751437 | orchestrator | 2025-03-23 13:42:44.751451 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-03-23 13:42:44.751465 | orchestrator | Sunday 23 March 2025 13:41:53 +0000 (0:00:07.126) 0:00:59.372 ********** 2025-03-23 13:42:44.751480 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-03-23 13:42:44.751494 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-03-23 13:42:44.751508 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-03-23 13:42:44.751522 | orchestrator | 2025-03-23 13:42:44.751536 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-03-23 13:42:44.751551 | orchestrator | Sunday 23 March 2025 13:41:58 +0000 (0:00:04.739) 0:01:04.112 ********** 2025-03-23 13:42:44.751565 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:42:44.751658 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:42:44.751693 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:42:44.751708 | orchestrator | 2025-03-23 13:42:44.751722 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-03-23 13:42:44.751736 | orchestrator | Sunday 23 March 2025 13:42:01 +0000 (0:00:02.842) 0:01:06.954 ********** 2025-03-23 13:42:44.751751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-23 13:42:44.751766 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:42:44.751790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-23 13:42:44.751805 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:42:44.751820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-23 13:42:44.751843 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:42:44.751863 | orchestrator | 2025-03-23 13:42:44.751877 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-03-23 13:42:44.751891 | orchestrator | Sunday 23 March 2025 13:42:03 +0000 (0:00:02.492) 0:01:09.447 ********** 2025-03-23 13:42:44.751910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:44.751925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:44.751947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:44.751964 | orchestrator | 2025-03-23 13:42:44.751978 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-03-23 13:42:44.751992 | orchestrator | Sunday 23 March 2025 13:42:06 +0000 (0:00:02.816) 0:01:12.264 ********** 2025-03-23 13:42:44.752013 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:42:44.752027 | orchestrator | 2025-03-23 13:42:44.752041 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-03-23 13:42:44.752055 | orchestrator | Sunday 23 March 2025 13:42:10 +0000 (0:00:03.510) 0:01:15.774 ********** 2025-03-23 13:42:44.752069 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:42:44.752083 | orchestrator | 2025-03-23 13:42:44.752097 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-03-23 13:42:44.752110 | orchestrator | Sunday 23 March 2025 13:42:12 +0000 (0:00:02.739) 0:01:18.514 ********** 2025-03-23 13:42:44.752125 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:42:44.752138 | orchestrator | 2025-03-23 13:42:44.752152 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-03-23 13:42:44.752166 | orchestrator | Sunday 23 March 2025 13:42:28 +0000 (0:00:15.095) 0:01:33.610 ********** 2025-03-23 13:42:44.752180 | orchestrator | 2025-03-23 13:42:44.752194 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-03-23 13:42:44.752208 | orchestrator | Sunday 23 March 2025 13:42:28 +0000 (0:00:00.218) 0:01:33.828 ********** 2025-03-23 13:42:44.752221 | orchestrator | 2025-03-23 13:42:44.752235 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-03-23 13:42:44.752249 | orchestrator | Sunday 23 March 2025 13:42:29 +0000 (0:00:00.834) 0:01:34.662 ********** 2025-03-23 13:42:44.752263 | orchestrator | 2025-03-23 13:42:44.752277 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-03-23 13:42:44.752293 | orchestrator | Sunday 23 March 2025 13:42:29 +0000 (0:00:00.443) 0:01:35.106 ********** 2025-03-23 13:42:44.752309 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:42:44.752324 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:42:44.752339 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:42:44.752354 | orchestrator | 
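Note: the repeated per-item dictionaries above are the Kolla service definition for placement-api (container image, bind mounts, healthcheck, and the internal/external HAProxy frontends), rendered once per testbed node. As a minimal sketch only, the deployment just performed could be spot-checked by hand roughly as follows; the IP 192.168.16.10, port 8780, and the api-int/api FQDNs are taken from the log above, while the curl call merely approximates the kolla healthcheck_curl helper and the openstack commands assume an admin RC file is sourced — none of this is executed by the job itself.

    # approximate the placement_api container healthcheck on testbed-node-0
    curl -s -o /dev/null -w '%{http_code}\n' http://192.168.16.10:8780

    # keystone objects created by the service-ks-register role for placement
    openstack service show placement
    openstack endpoint list --service placement --interface internal
    openstack endpoint list --service placement --interface public
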
2025-03-23 13:42:44.752370 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:42:44.752385 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-23 13:42:44.752401 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-23 13:42:44.752417 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-23 13:42:44.752432 | orchestrator | 2025-03-23 13:42:44.752448 | orchestrator | 2025-03-23 13:42:44.752463 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:42:44.752478 | orchestrator | Sunday 23 March 2025 13:42:42 +0000 (0:00:12.463) 0:01:47.569 ********** 2025-03-23 13:42:44.752494 | orchestrator | =============================================================================== 2025-03-23 13:42:44.752509 | orchestrator | placement : Running placement bootstrap container ---------------------- 15.10s 2025-03-23 13:42:44.752525 | orchestrator | placement : Restart placement-api container ---------------------------- 12.46s 2025-03-23 13:42:44.752540 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.54s 2025-03-23 13:42:44.752555 | orchestrator | placement : Copying over placement.conf --------------------------------- 7.13s 2025-03-23 13:42:44.752570 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 5.60s 2025-03-23 13:42:44.752590 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 4.74s 2025-03-23 13:42:44.752607 | orchestrator | service-ks-register : placement | Creating projects --------------------- 4.67s 2025-03-23 13:42:44.752623 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.33s 2025-03-23 13:42:44.752638 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.05s 2025-03-23 13:42:44.752660 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.81s 2025-03-23 13:42:44.752713 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 3.77s 2025-03-23 13:42:44.752736 | orchestrator | placement : Creating placement databases -------------------------------- 3.51s 2025-03-23 13:42:44.752759 | orchestrator | placement : Copying over config.json files for services ----------------- 3.35s 2025-03-23 13:42:44.752781 | orchestrator | placement : Ensuring config directories exist --------------------------- 3.18s 2025-03-23 13:42:44.752804 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 2.97s 2025-03-23 13:42:44.752828 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.84s 2025-03-23 13:42:44.752852 | orchestrator | placement : Check placement containers ---------------------------------- 2.82s 2025-03-23 13:42:44.752870 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.74s 2025-03-23 13:42:44.752884 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 2.65s 2025-03-23 13:42:44.752897 | orchestrator | placement : Copying over existing policy file --------------------------- 2.49s 2025-03-23 13:42:44.752911 | orchestrator | 2025-03-23 13:42:44 | INFO  | Task 
acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:42:44.752932 | orchestrator | 2025-03-23 13:42:44 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:42:47.805975 | orchestrator | 2025-03-23 13:42:44 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:42:47.806132 | orchestrator | 2025-03-23 13:42:44 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:42:47.806154 | orchestrator | 2025-03-23 13:42:44 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:42:47.806186 | orchestrator | 2025-03-23 13:42:47 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:42:47.806562 | orchestrator | 2025-03-23 13:42:47 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:42:47.807641 | orchestrator | 2025-03-23 13:42:47 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:42:47.808768 | orchestrator | 2025-03-23 13:42:47 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:42:47.810834 | orchestrator | 2025-03-23 13:42:47 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:42:50.836747 | orchestrator | 2025-03-23 13:42:47 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:42:50.836879 | orchestrator | 2025-03-23 13:42:50 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:42:50.837266 | orchestrator | 2025-03-23 13:42:50 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:42:50.837713 | orchestrator | 2025-03-23 13:42:50 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:42:50.838465 | orchestrator | 2025-03-23 13:42:50 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:42:50.839113 | orchestrator | 2025-03-23 13:42:50 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state STARTED 2025-03-23 13:42:53.862310 | orchestrator | 2025-03-23 13:42:50 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:42:53.862406 | orchestrator | 2025-03-23 13:42:53 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:42:53.863395 | orchestrator | 2025-03-23 13:42:53 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:42:53.864116 | orchestrator | 2025-03-23 13:42:53 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:42:53.864746 | orchestrator | 2025-03-23 13:42:53 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:42:53.866332 | orchestrator | 2025-03-23 13:42:53 | INFO  | Task 6ce39303-0d72-49d3-a64c-a3a291c3d789 is in state SUCCESS 2025-03-23 13:42:53.867871 | orchestrator | 2025-03-23 13:42:53.867892 | orchestrator | 2025-03-23 13:42:53.867898 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 13:42:53.867904 | orchestrator | 2025-03-23 13:42:53.867910 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 13:42:53.867917 | orchestrator | Sunday 23 March 2025 13:40:15 +0000 (0:00:00.469) 0:00:00.469 ********** 2025-03-23 13:42:53.867922 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:42:53.867929 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:42:53.867935 | orchestrator | ok: [testbed-node-2] 2025-03-23 
13:42:53.867940 | orchestrator | 2025-03-23 13:42:53.867946 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-23 13:42:53.867951 | orchestrator | Sunday 23 March 2025 13:40:16 +0000 (0:00:00.738) 0:00:01.208 ********** 2025-03-23 13:42:53.867957 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-03-23 13:42:53.867963 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-03-23 13:42:53.867968 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-03-23 13:42:53.867973 | orchestrator | 2025-03-23 13:42:53.867979 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-03-23 13:42:53.867984 | orchestrator | 2025-03-23 13:42:53.867989 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-03-23 13:42:53.867995 | orchestrator | Sunday 23 March 2025 13:40:16 +0000 (0:00:00.379) 0:00:01.587 ********** 2025-03-23 13:42:53.868000 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:42:53.868007 | orchestrator | 2025-03-23 13:42:53.868013 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-03-23 13:42:53.868018 | orchestrator | Sunday 23 March 2025 13:40:17 +0000 (0:00:01.051) 0:00:02.638 ********** 2025-03-23 13:42:53.868024 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-03-23 13:42:53.868029 | orchestrator | 2025-03-23 13:42:53.868034 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-03-23 13:42:53.868040 | orchestrator | Sunday 23 March 2025 13:40:21 +0000 (0:00:03.803) 0:00:06.442 ********** 2025-03-23 13:42:53.868045 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-03-23 13:42:53.868051 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-03-23 13:42:53.868056 | orchestrator | 2025-03-23 13:42:53.868061 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-03-23 13:42:53.868067 | orchestrator | Sunday 23 March 2025 13:40:29 +0000 (0:00:07.667) 0:00:14.109 ********** 2025-03-23 13:42:53.868072 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-03-23 13:42:53.868078 | orchestrator | 2025-03-23 13:42:53.868083 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-03-23 13:42:53.868089 | orchestrator | Sunday 23 March 2025 13:40:33 +0000 (0:00:04.189) 0:00:18.299 ********** 2025-03-23 13:42:53.868094 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-03-23 13:42:53.868100 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-03-23 13:42:53.868105 | orchestrator | 2025-03-23 13:42:53.868110 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-03-23 13:42:53.868127 | orchestrator | Sunday 23 March 2025 13:40:37 +0000 (0:00:04.425) 0:00:22.724 ********** 2025-03-23 13:42:53.868133 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-03-23 13:42:53.868139 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-03-23 13:42:53.868144 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-03-23 
13:42:53.868213 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-03-23 13:42:53.868220 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-03-23 13:42:53.868226 | orchestrator | 2025-03-23 13:42:53.868232 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-03-23 13:42:53.868237 | orchestrator | Sunday 23 March 2025 13:40:55 +0000 (0:00:17.951) 0:00:40.676 ********** 2025-03-23 13:42:53.868243 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-03-23 13:42:53.868248 | orchestrator | 2025-03-23 13:42:53.868254 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-03-23 13:42:53.868259 | orchestrator | Sunday 23 March 2025 13:41:01 +0000 (0:00:05.633) 0:00:46.309 ********** 2025-03-23 13:42:53.868281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:53.868298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:53.868490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 
'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:53.868503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.868516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.868521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.868532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.868539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.868544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.868549 | orchestrator | 2025-03-23 13:42:53.868554 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-03-23 13:42:53.868560 | orchestrator | Sunday 23 March 2025 13:41:04 +0000 (0:00:03.204) 0:00:49.514 ********** 2025-03-23 13:42:53.868569 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-03-23 13:42:53.868574 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-03-23 13:42:53.868579 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-03-23 13:42:53.868584 | orchestrator | 2025-03-23 13:42:53.868589 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-03-23 13:42:53.868594 | orchestrator | Sunday 23 March 2025 13:41:07 +0000 (0:00:03.200) 0:00:52.715 ********** 2025-03-23 13:42:53.868599 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:42:53.868604 | orchestrator | 2025-03-23 13:42:53.868609 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-03-23 13:42:53.868614 | orchestrator | Sunday 23 March 2025 13:41:08 +0000 (0:00:00.420) 0:00:53.136 ********** 2025-03-23 13:42:53.868619 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:42:53.868624 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:42:53.868629 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:42:53.868634 | orchestrator | 2025-03-23 13:42:53.868640 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-03-23 13:42:53.868645 | orchestrator | Sunday 23 March 2025 13:41:08 +0000 (0:00:00.760) 0:00:53.896 ********** 2025-03-23 13:42:53.868650 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:42:53.868655 | orchestrator | 2025-03-23 13:42:53.868660 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-03-23 13:42:53.868665 | orchestrator | Sunday 23 March 2025 13:41:10 +0000 (0:00:01.310) 0:00:55.206 ********** 2025-03-23 13:42:53.868693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:53.868706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:53.868712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:53.868722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.868730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.868735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.868744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.868750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.868758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.868764 | orchestrator | 2025-03-23 13:42:53.868769 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-03-23 13:42:53.868774 | orchestrator | Sunday 23 March 
2025 13:41:16 +0000 (0:00:06.056) 0:01:01.263 ********** 2025-03-23 13:42:53.868779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-23 13:42:53.868787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-23 13:42:53.868797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:42:53.868803 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:42:53.868808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 
'no'}}}})  2025-03-23 13:42:53.868817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-23 13:42:53.868822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:42:53.868827 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:42:53.868835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-23 13:42:53.868844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-23 13:42:53.868850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:42:53.868858 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:42:53.868864 | orchestrator | 2025-03-23 13:42:53.868869 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-03-23 13:42:53.868874 | orchestrator | Sunday 23 March 2025 13:41:17 +0000 (0:00:00.979) 0:01:02.242 ********** 2025-03-23 13:42:53.868879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-23 13:42:53.868890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-23 13:42:53.868896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:42:53.868901 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:42:53.868909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-23 13:42:53.868915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-23 13:42:53.868923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:42:53.868928 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:42:53.868933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-23 13:42:53.868941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 
'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-23 13:42:53.868946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:42:53.868952 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:42:53.868957 | orchestrator | 2025-03-23 13:42:53.868962 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-03-23 13:42:53.868969 | orchestrator | Sunday 23 March 2025 13:41:18 +0000 (0:00:00.849) 0:01:03.092 ********** 2025-03-23 13:42:53.868975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:53.868983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:53.868991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:53.868996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.869006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.869015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.869020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.869025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.869033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.869038 | orchestrator | 2025-03-23 13:42:53.869043 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-03-23 13:42:53.869048 | orchestrator | Sunday 23 March 2025 13:41:22 +0000 (0:00:03.999) 0:01:07.091 ********** 2025-03-23 13:42:53.869053 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:42:53.869059 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:42:53.869064 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:42:53.869069 | orchestrator | 2025-03-23 13:42:53.869074 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-03-23 13:42:53.869079 | orchestrator | Sunday 23 March 2025 13:41:25 +0000 (0:00:03.331) 0:01:10.423 ********** 2025-03-23 13:42:53.869084 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-23 13:42:53.869089 | orchestrator | 2025-03-23 13:42:53.869094 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-03-23 13:42:53.869099 | orchestrator | Sunday 23 March 2025 13:41:27 +0000 (0:00:01.975) 0:01:12.399 ********** 2025-03-23 13:42:53.869104 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:42:53.869112 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:42:53.869117 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:42:53.869122 | orchestrator | 2025-03-23 13:42:53.869128 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-03-23 13:42:53.869134 | orchestrator | Sunday 23 March 2025 13:41:30 +0000 (0:00:03.417) 0:01:15.816 ********** 2025-03-23 13:42:53.869144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:53.869151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:53.869157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:53.869165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.869174 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.869182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.869188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.869194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.869200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.869206 | orchestrator | 2025-03-23 13:42:53.869211 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-03-23 
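Each loop item in the tasks above is a kolla-ansible service definition for one barbican container; its 'healthcheck' block carries seconds-as-strings for interval, timeout, start_period and retries plus a CMD-SHELL test (healthcheck_curl against the API on port 9311, healthcheck_port <service> 5672 for the keystone listener and worker). A minimal Python sketch of how such a block maps onto Docker-API healthcheck fields, assuming the plain-dict shape shown in the log rather than kolla-ansible's actual container module:

# Illustrative only: convert the 'healthcheck' block of a service definition
# (as logged above) into Docker Engine API style values. Docker expects
# durations in nanoseconds; the logged values are seconds as strings.
SECONDS_TO_NANOS = 1_000_000_000

def to_docker_healthcheck(service_value: dict) -> dict:
    hc = service_value.get("healthcheck", {})
    if not hc:
        return {}
    return {
        "Test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311']
        "Interval": int(hc["interval"]) * SECONDS_TO_NANOS,
        "Timeout": int(hc["timeout"]) * SECONDS_TO_NANOS,
        "StartPeriod": int(hc["start_period"]) * SECONDS_TO_NANOS,
        "Retries": int(hc["retries"]),
    }

# Example with the barbican_api item from the log:
barbican_api = {
    "container_name": "barbican_api",
    "healthcheck": {
        "interval": "30", "retries": "3", "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
        "timeout": "30",
    },
}
print(to_docker_healthcheck(barbican_api))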
13:42:53.869217 | orchestrator | Sunday 23 March 2025 13:41:48 +0000 (0:00:17.485) 0:01:33.302 ********** 2025-03-23 13:42:53.869225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-23 13:42:53.869237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-23 13:42:53.869244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:42:53.869249 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:42:53.869255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-23 13:42:53.869264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-23 13:42:53.869269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:42:53.869279 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:42:53.869288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-23 13:42:53.869294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-23 13:42:53.869299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:42:53.869304 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:42:53.869309 | orchestrator | 2025-03-23 13:42:53.869315 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-03-23 13:42:53.869323 | orchestrator | Sunday 23 March 2025 13:41:52 +0000 (0:00:03.948) 0:01:37.250 ********** 2025-03-23 13:42:53.869332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:53.869341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:53.869350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-23 13:42:53.869356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.869363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.869368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.869382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.869388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.869397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:42:53.869402 | orchestrator | 2025-03-23 13:42:53.869407 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-03-23 13:42:53.869412 | orchestrator | Sunday 23 March 2025 13:41:57 +0000 (0:00:05.239) 0:01:42.490 ********** 2025-03-23 13:42:53.869417 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:42:53.869422 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:42:53.869427 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:42:53.869432 | orchestrator | 2025-03-23 13:42:53.869438 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-03-23 13:42:53.869443 | orchestrator | Sunday 23 March 2025 13:41:58 +0000 (0:00:00.651) 0:01:43.141 ********** 2025-03-23 13:42:53.869448 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:42:53.869452 | orchestrator | 2025-03-23 13:42:53.869458 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-03-23 13:42:53.869462 | orchestrator | Sunday 23 March 2025 13:42:01 +0000 (0:00:03.109) 0:01:46.251 ********** 2025-03-23 13:42:53.869467 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:42:53.869472 | orchestrator | 2025-03-23 13:42:53.869477 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-03-23 13:42:53.869482 | orchestrator | Sunday 23 March 2025 13:42:03 +0000 (0:00:02.655) 0:01:48.907 ********** 2025-03-23 13:42:53.869487 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:42:53.869492 | orchestrator | 2025-03-23 13:42:53.869497 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-03-23 13:42:53.869502 | orchestrator | Sunday 23 March 2025 13:42:17 +0000 (0:00:13.315) 0:02:02.222 ********** 2025-03-23 13:42:53.869507 | orchestrator | 2025-03-23 13:42:53.869512 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-03-23 13:42:53.869517 | orchestrator | Sunday 23 March 2025 13:42:17 +0000 (0:00:00.185) 0:02:02.408 ********** 2025-03-23 13:42:53.869525 | orchestrator | 2025-03-23 13:42:53.869530 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-03-23 13:42:53.869535 | orchestrator | Sunday 23 March 2025 13:42:17 +0000 (0:00:00.368) 0:02:02.777 ********** 2025-03-23 13:42:53.869540 | orchestrator | 2025-03-23 13:42:53.869545 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-03-23 13:42:53.869550 | orchestrator | Sunday 
23 March 2025 13:42:17 +0000 (0:00:00.082) 0:02:02.860 ********** 2025-03-23 13:42:53.869555 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:42:53.869560 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:42:53.869565 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:42:53.869570 | orchestrator | 2025-03-23 13:42:53.869575 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-03-23 13:42:53.869580 | orchestrator | Sunday 23 March 2025 13:42:27 +0000 (0:00:09.885) 0:02:12.746 ********** 2025-03-23 13:42:53.869585 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:42:53.869590 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:42:53.869595 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:42:53.869600 | orchestrator | 2025-03-23 13:42:53.869605 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-03-23 13:42:53.869610 | orchestrator | Sunday 23 March 2025 13:42:39 +0000 (0:00:11.352) 0:02:24.099 ********** 2025-03-23 13:42:53.869615 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:42:53.869620 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:42:53.869625 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:42:53.869630 | orchestrator | 2025-03-23 13:42:53.869635 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:42:53.869640 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-03-23 13:42:53.869646 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-23 13:42:53.869651 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-23 13:42:53.869656 | orchestrator | 2025-03-23 13:42:53.869661 | orchestrator | 2025-03-23 13:42:53.869677 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:42:53.869683 | orchestrator | Sunday 23 March 2025 13:42:52 +0000 (0:00:13.591) 0:02:37.690 ********** 2025-03-23 13:42:53.869688 | orchestrator | =============================================================================== 2025-03-23 13:42:53.869693 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.95s 2025-03-23 13:42:53.869698 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 17.49s 2025-03-23 13:42:53.869703 | orchestrator | barbican : Restart barbican-worker container --------------------------- 13.59s 2025-03-23 13:42:53.869708 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.32s 2025-03-23 13:42:53.869713 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.36s 2025-03-23 13:42:53.869718 | orchestrator | barbican : Restart barbican-api container ------------------------------- 9.89s 2025-03-23 13:42:53.869723 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.67s 2025-03-23 13:42:53.869733 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 6.06s 2025-03-23 13:42:56.898323 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 5.63s 2025-03-23 13:42:56.898440 | orchestrator | barbican : Check barbican containers ------------------------------------ 5.24s 2025-03-23 
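The PLAY RECAP above summarizes the barbican play per host (ok/changed/unreachable/failed/skipped/rescued/ignored), and the TASKS RECAP that follows lists the slowest tasks with their durations. A small Python sketch for turning recap host lines into counters, assuming only the line format shown above; it is a convenience for reading such logs, not part of the job itself:

# Illustrative only: parse Ansible "PLAY RECAP" host lines like the ones above
# into per-host counters and flag anything failed or unreachable.
import re

RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line):
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    counters = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", m.group("counters"))}
    return m.group("host"), counters

line = "testbed-node-0 : ok=24 changed=19 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0"
host, counts = parse_recap_line(line)
assert counts["failed"] == 0 and counts["unreachable"] == 0  # this host passed the play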
13:42:56.898459 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.43s 2025-03-23 13:42:56.898475 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 4.19s 2025-03-23 13:42:56.898489 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.00s 2025-03-23 13:42:56.898530 | orchestrator | barbican : Copying over existing policy file ---------------------------- 3.95s 2025-03-23 13:42:56.898545 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.80s 2025-03-23 13:42:56.898560 | orchestrator | barbican : Copying over barbican-api-paste.ini -------------------------- 3.42s 2025-03-23 13:42:56.898574 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.33s 2025-03-23 13:42:56.898589 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.21s 2025-03-23 13:42:56.898604 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 3.20s 2025-03-23 13:42:56.898619 | orchestrator | barbican : Creating barbican database ----------------------------------- 3.11s 2025-03-23 13:42:56.898634 | orchestrator | 2025-03-23 13:42:53 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:42:56.898713 | orchestrator | 2025-03-23 13:42:56 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:42:56.899899 | orchestrator | 2025-03-23 13:42:56 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:42:56.900573 | orchestrator | 2025-03-23 13:42:56 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:42:56.901379 | orchestrator | 2025-03-23 13:42:56 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:42:56.903531 | orchestrator | 2025-03-23 13:42:56 | INFO  | Task 2bcb19c3-a5e6-414d-90f4-431cbacdb05b is in state STARTED 2025-03-23 13:42:59.947427 | orchestrator | 2025-03-23 13:42:56 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:42:59.947550 | orchestrator | 2025-03-23 13:42:59 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:42:59.947932 | orchestrator | 2025-03-23 13:42:59 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:42:59.950551 | orchestrator | 2025-03-23 13:42:59 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:42:59.951359 | orchestrator | 2025-03-23 13:42:59 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:42:59.952047 | orchestrator | 2025-03-23 13:42:59 | INFO  | Task 2bcb19c3-a5e6-414d-90f4-431cbacdb05b is in state STARTED 2025-03-23 13:42:59.952177 | orchestrator | 2025-03-23 13:42:59 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:43:02.993475 | orchestrator | 2025-03-23 13:43:02 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:43:02.994131 | orchestrator | 2025-03-23 13:43:02 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:43:02.994178 | orchestrator | 2025-03-23 13:43:02 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:43:02.994741 | orchestrator | 2025-03-23 13:43:02 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:43:02.995346 | orchestrator | 2025-03-23 13:43:02 | INFO  | Task 
2bcb19c3-a5e6-414d-90f4-431cbacdb05b is in state STARTED 2025-03-23 13:43:02.995868 | orchestrator | 2025-03-23 13:43:02 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:43:06.049013 | orchestrator | 2025-03-23 13:43:06 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:43:06.049138 | orchestrator | 2025-03-23 13:43:06 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:43:06.049863 | orchestrator | 2025-03-23 13:43:06 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:43:06.050533 | orchestrator | 2025-03-23 13:43:06 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:43:06.051175 | orchestrator | 2025-03-23 13:43:06 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:43:06.051724 | orchestrator | 2025-03-23 13:43:06 | INFO  | Task 2bcb19c3-a5e6-414d-90f4-431cbacdb05b is in state SUCCESS 2025-03-23 13:43:09.089992 | orchestrator | 2025-03-23 13:43:06 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:43:09.090152 | orchestrator | 2025-03-23 13:43:09 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:43:09.091208 | orchestrator | 2025-03-23 13:43:09 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:43:09.093049 | orchestrator | 2025-03-23 13:43:09 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:43:09.096811 | orchestrator | 2025-03-23 13:43:09 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:43:09.098218 | orchestrator | 2025-03-23 13:43:09 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:43:09.098358 | orchestrator | 2025-03-23 13:43:09 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:43:12.137315 | orchestrator | 2025-03-23 13:43:12 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:43:12.138797 | orchestrator | 2025-03-23 13:43:12 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:43:12.139696 | orchestrator | 2025-03-23 13:43:12 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:43:12.140752 | orchestrator | 2025-03-23 13:43:12 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:43:12.141993 | orchestrator | 2025-03-23 13:43:12 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:43:15.197713 | orchestrator | 2025-03-23 13:43:12 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:43:15.197852 | orchestrator | 2025-03-23 13:43:15 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:43:15.198166 | orchestrator | 2025-03-23 13:43:15 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:43:15.198954 | orchestrator | 2025-03-23 13:43:15 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:43:15.199986 | orchestrator | 2025-03-23 13:43:15 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:43:15.200764 | orchestrator | 2025-03-23 13:43:15 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:43:18.233612 | orchestrator | 2025-03-23 13:43:15 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:43:18.233763 | orchestrator | 2025-03-23 13:43:18 | INFO  | Task 
f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:43:18.234146 | orchestrator | 2025-03-23 13:43:18 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:43:18.234517 | orchestrator | 2025-03-23 13:43:18 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:43:18.235185 | orchestrator | 2025-03-23 13:43:18 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:43:18.235845 | orchestrator | 2025-03-23 13:43:18 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:43:21.300454 | orchestrator | 2025-03-23 13:43:18 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:43:21.300586 | orchestrator | 2025-03-23 13:43:21 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:43:21.302486 | orchestrator | 2025-03-23 13:43:21 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:43:21.302517 | orchestrator | 2025-03-23 13:43:21 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:43:21.302539 | orchestrator | 2025-03-23 13:43:21 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:43:21.303050 | orchestrator | 2025-03-23 13:43:21 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:43:21.303155 | orchestrator | 2025-03-23 13:43:21 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:43:24.328764 | orchestrator | 2025-03-23 13:43:24 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:43:24.329495 | orchestrator | 2025-03-23 13:43:24 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:43:24.331349 | orchestrator | 2025-03-23 13:43:24 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:43:24.332025 | orchestrator | 2025-03-23 13:43:24 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:43:24.332792 | orchestrator | 2025-03-23 13:43:24 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:43:27.355439 | orchestrator | 2025-03-23 13:43:24 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:43:27.355570 | orchestrator | 2025-03-23 13:43:27 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:43:27.355901 | orchestrator | 2025-03-23 13:43:27 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:43:27.356473 | orchestrator | 2025-03-23 13:43:27 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:43:27.357257 | orchestrator | 2025-03-23 13:43:27 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:43:27.358167 | orchestrator | 2025-03-23 13:43:27 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:43:30.417187 | orchestrator | 2025-03-23 13:43:27 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:43:30.417296 | orchestrator | 2025-03-23 13:43:30 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:43:30.422631 | orchestrator | 2025-03-23 13:43:30 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:43:30.425442 | orchestrator | 2025-03-23 13:43:30 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:43:30.426614 | orchestrator | 2025-03-23 
13:43:30 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:43:30.427882 | orchestrator | 2025-03-23 13:43:30 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:43:30.428097 | orchestrator | 2025-03-23 13:43:30 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:43:33.479269 | orchestrator | 2025-03-23 13:43:33 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:43:33.479907 | orchestrator | 2025-03-23 13:43:33 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:43:33.480931 | orchestrator | 2025-03-23 13:43:33 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:43:33.481959 | orchestrator | 2025-03-23 13:43:33 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:43:33.483136 | orchestrator | 2025-03-23 13:43:33 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:43:33.483306 | orchestrator | 2025-03-23 13:43:33 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:43:36.535227 | orchestrator | 2025-03-23 13:43:36 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:43:36.535663 | orchestrator | 2025-03-23 13:43:36 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:43:36.536743 | orchestrator | 2025-03-23 13:43:36 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:43:36.538070 | orchestrator | 2025-03-23 13:43:36 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:43:36.539008 | orchestrator | 2025-03-23 13:43:36 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:43:36.539121 | orchestrator | 2025-03-23 13:43:36 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:43:39.569823 | orchestrator | 2025-03-23 13:43:39 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:43:39.570331 | orchestrator | 2025-03-23 13:43:39 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:43:39.571397 | orchestrator | 2025-03-23 13:43:39 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:43:39.572398 | orchestrator | 2025-03-23 13:43:39 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:43:39.574352 | orchestrator | 2025-03-23 13:43:39 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:43:42.609722 | orchestrator | 2025-03-23 13:43:39 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:43:42.609848 | orchestrator | 2025-03-23 13:43:42 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:43:42.610306 | orchestrator | 2025-03-23 13:43:42 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:43:42.612481 | orchestrator | 2025-03-23 13:43:42 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:43:42.613088 | orchestrator | 2025-03-23 13:43:42 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:43:42.613130 | orchestrator | 2025-03-23 13:43:42 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:43:45.661111 | orchestrator | 2025-03-23 13:43:42 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:43:45.661230 | orchestrator | 2025-03-23 
13:43:45 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:43:45.662484 | orchestrator | 2025-03-23 13:43:45 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:43:45.663152 | orchestrator | 2025-03-23 13:43:45 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:43:45.664075 | orchestrator | 2025-03-23 13:43:45 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:43:45.666470 | orchestrator | 2025-03-23 13:43:45 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:43:48.716421 | orchestrator | 2025-03-23 13:43:45 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:43:48.716540 | orchestrator | 2025-03-23 13:43:48 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:43:48.717147 | orchestrator | 2025-03-23 13:43:48 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:43:48.717184 | orchestrator | 2025-03-23 13:43:48 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:43:48.717928 | orchestrator | 2025-03-23 13:43:48 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:43:48.718383 | orchestrator | 2025-03-23 13:43:48 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:43:51.771901 | orchestrator | 2025-03-23 13:43:48 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:43:51.772012 | orchestrator | 2025-03-23 13:43:51 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:43:51.773322 | orchestrator | 2025-03-23 13:43:51 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:43:51.774896 | orchestrator | 2025-03-23 13:43:51 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:43:51.775098 | orchestrator | 2025-03-23 13:43:51 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:43:51.776744 | orchestrator | 2025-03-23 13:43:51 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:43:54.808982 | orchestrator | 2025-03-23 13:43:51 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:43:54.809108 | orchestrator | 2025-03-23 13:43:54 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:43:54.809429 | orchestrator | 2025-03-23 13:43:54 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:43:54.809460 | orchestrator | 2025-03-23 13:43:54 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:43:54.809842 | orchestrator | 2025-03-23 13:43:54 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:43:54.810582 | orchestrator | 2025-03-23 13:43:54 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:43:57.846838 | orchestrator | 2025-03-23 13:43:54 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:43:57.846963 | orchestrator | 2025-03-23 13:43:57 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:43:57.847436 | orchestrator | 2025-03-23 13:43:57 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:43:57.850509 | orchestrator | 2025-03-23 13:43:57 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:43:57.851278 | 
orchestrator | 2025-03-23 13:43:57 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:43:57.852142 | orchestrator | 2025-03-23 13:43:57 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:44:00.890919 | orchestrator | 2025-03-23 13:43:57 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:44:00.891155 | orchestrator | 2025-03-23 13:44:00 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:44:00.892068 | orchestrator | 2025-03-23 13:44:00 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:44:00.892120 | orchestrator | 2025-03-23 13:44:00 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:44:00.893455 | orchestrator | 2025-03-23 13:44:00 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:44:00.894261 | orchestrator | 2025-03-23 13:44:00 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:44:03.927352 | orchestrator | 2025-03-23 13:44:00 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:44:03.927489 | orchestrator | 2025-03-23 13:44:03 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:44:03.929509 | orchestrator | 2025-03-23 13:44:03 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:44:03.931183 | orchestrator | 2025-03-23 13:44:03 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:44:03.932863 | orchestrator | 2025-03-23 13:44:03 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:44:03.933580 | orchestrator | 2025-03-23 13:44:03 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:44:03.933844 | orchestrator | 2025-03-23 13:44:03 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:44:06.996305 | orchestrator | 2025-03-23 13:44:06 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:44:06.997746 | orchestrator | 2025-03-23 13:44:06 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:44:06.998817 | orchestrator | 2025-03-23 13:44:06 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:44:07.000764 | orchestrator | 2025-03-23 13:44:07 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:44:07.001726 | orchestrator | 2025-03-23 13:44:07 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:44:07.003382 | orchestrator | 2025-03-23 13:44:07 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:44:10.059269 | orchestrator | 2025-03-23 13:44:10 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:44:10.061396 | orchestrator | 2025-03-23 13:44:10 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:44:10.063238 | orchestrator | 2025-03-23 13:44:10 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:44:10.065117 | orchestrator | 2025-03-23 13:44:10 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state STARTED 2025-03-23 13:44:10.068445 | orchestrator | 2025-03-23 13:44:10 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:44:10.068550 | orchestrator | 2025-03-23 13:44:10 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:44:13.123941 | 
orchestrator | 2025-03-23 13:44:13 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
2025-03-23 13:44:13.124249 | orchestrator | 2025-03-23 13:44:13 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED
2025-03-23 13:44:13.124290 | orchestrator | 2025-03-23 13:44:13 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED
2025-03-23 13:44:13.125856 | orchestrator | 2025-03-23 13:44:13 | INFO  | Task 892cf218-0f6d-4305-ac2a-4861c0c8ce53 is in state SUCCESS
2025-03-23 13:44:13.127826 | orchestrator |
2025-03-23 13:44:13.127871 | orchestrator |
2025-03-23 13:44:13.127888 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-23 13:44:13.127903 | orchestrator |
2025-03-23 13:44:13.127917 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-03-23 13:44:13.127931 | orchestrator | Sunday 23 March 2025 13:42:58 +0000 (0:00:00.390) 0:00:00.390 **********
2025-03-23 13:44:13.127945 | orchestrator | ok: [testbed-node-0]
2025-03-23 13:44:13.127961 | orchestrator | ok: [testbed-node-1]
2025-03-23 13:44:13.127975 | orchestrator | ok: [testbed-node-2]
2025-03-23 13:44:13.127989 | orchestrator |
2025-03-23 13:44:13.128003 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-03-23 13:44:13.128017 | orchestrator | Sunday 23 March 2025 13:42:59 +0000 (0:00:01.198) 0:00:01.588 **********
2025-03-23 13:44:13.128030 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-03-23 13:44:13.128045 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-03-23 13:44:13.128059 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-03-23 13:44:13.128097 | orchestrator |
2025-03-23 13:44:13.128111 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-03-23 13:44:13.128125 | orchestrator |
2025-03-23 13:44:13.128139 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-03-23 13:44:13.128153 | orchestrator | Sunday 23 March 2025 13:43:01 +0000 (0:00:02.060) 0:00:03.648 **********
2025-03-23 13:44:13.128166 | orchestrator | ok: [testbed-node-0]
2025-03-23 13:44:13.128181 | orchestrator | ok: [testbed-node-1]
2025-03-23 13:44:13.128195 | orchestrator | ok: [testbed-node-2]
2025-03-23 13:44:13.128209 | orchestrator |
2025-03-23 13:44:13.128222 | orchestrator | PLAY RECAP *********************************************************************
2025-03-23 13:44:13.128733 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-23 13:44:13.128772 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-23 13:44:13.128787 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-23 13:44:13.128801 | orchestrator |
2025-03-23 13:44:13.128815 | orchestrator |
2025-03-23 13:44:13.128829 | orchestrator | TASKS RECAP ********************************************************************
2025-03-23 13:44:13.128843 | orchestrator | Sunday 23 March 2025 13:43:02 +0000 (0:00:01.227) 0:00:04.875 **********
2025-03-23 13:44:13.128857 | orchestrator | ===============================================================================
2025-03-23 13:44:13.128871 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.06s
2025-03-23 13:44:13.128884 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.23s
2025-03-23 13:44:13.128898 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.20s
2025-03-23 13:44:13.128911 | orchestrator |
2025-03-23 13:44:13.128925 | orchestrator |
2025-03-23 13:44:13.128939 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-23 13:44:13.128952 | orchestrator |
2025-03-23 13:44:13.128966 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-03-23 13:44:13.128980 | orchestrator | Sunday 23 March 2025 13:40:15 +0000 (0:00:00.425) 0:00:00.425 **********
2025-03-23 13:44:13.128993 | orchestrator | ok: [testbed-node-0]
2025-03-23 13:44:13.129008 | orchestrator | ok: [testbed-node-1]
2025-03-23 13:44:13.129031 | orchestrator | ok: [testbed-node-2]
2025-03-23 13:44:13.129047 | orchestrator |
2025-03-23 13:44:13.129065 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-03-23 13:44:13.129080 | orchestrator | Sunday 23 March 2025 13:40:15 +0000 (0:00:00.660) 0:00:01.086 **********
2025-03-23 13:44:13.129095 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-03-23 13:44:13.129197 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-03-23 13:44:13.129224 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-03-23 13:44:13.129239 | orchestrator |
2025-03-23 13:44:13.129253 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-03-23 13:44:13.129268 | orchestrator |
2025-03-23 13:44:13.129282 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-03-23 13:44:13.129733 | orchestrator | Sunday 23 March 2025 13:40:16 +0000 (0:00:00.563) 0:00:01.649 **********
2025-03-23 13:44:13.129859 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-23 13:44:13.130011 | orchestrator |
2025-03-23 13:44:13.130099 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-03-23 13:44:13.130127 | orchestrator | Sunday 23 March 2025 13:40:17 +0000 (0:00:01.403) 0:00:03.053 **********
2025-03-23 13:44:13.130151 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-03-23 13:44:13.130167 | orchestrator |
2025-03-23 13:44:13.130181 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-03-23 13:44:13.130776 | orchestrator | Sunday 23 March 2025 13:40:21 +0000 (0:00:04.098) 0:00:07.151 **********
2025-03-23 13:44:13.130796 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-03-23 13:44:13.130811 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-03-23 13:44:13.130825 | orchestrator |
2025-03-23 13:44:13.130840 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-03-23 13:44:13.130854 | orchestrator | Sunday 23 March 2025 13:40:29 +0000 (0:00:07.771) 0:00:14.922 **********
2025-03-23 13:44:13.130868 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-03-23 13:44:13.130883 | orchestrator |
2025-03-23 13:44:13.130897 |
orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-03-23 13:44:13.130911 | orchestrator | Sunday 23 March 2025 13:40:33 +0000 (0:00:04.041) 0:00:18.964 ********** 2025-03-23 13:44:13.130970 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-03-23 13:44:13.130987 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-03-23 13:44:13.131001 | orchestrator | 2025-03-23 13:44:13.131016 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-03-23 13:44:13.131029 | orchestrator | Sunday 23 March 2025 13:40:38 +0000 (0:00:04.567) 0:00:23.531 ********** 2025-03-23 13:44:13.131043 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-03-23 13:44:13.131057 | orchestrator | 2025-03-23 13:44:13.131071 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-03-23 13:44:13.131084 | orchestrator | Sunday 23 March 2025 13:40:42 +0000 (0:00:03.801) 0:00:27.333 ********** 2025-03-23 13:44:13.131098 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-03-23 13:44:13.131112 | orchestrator | 2025-03-23 13:44:13.131126 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-03-23 13:44:13.131140 | orchestrator | Sunday 23 March 2025 13:40:46 +0000 (0:00:04.784) 0:00:32.117 ********** 2025-03-23 13:44:13.131156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-23 13:44:13.131176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-23 13:44:13.131200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 
'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-23 13:44:13.131224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-23 13:44:13.131273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-23 13:44:13.131291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-23 13:44:13.131307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.131322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.131344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.131368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.131417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.131435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.131452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.131468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.131489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.131512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.131528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.131576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.131594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.131611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.131627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.131642 | orchestrator | 2025-03-23 13:44:13.131658 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-03-23 13:44:13.131701 | orchestrator | Sunday 23 March 2025 13:40:50 +0000 (0:00:03.420) 0:00:35.538 ********** 2025-03-23 13:44:13.131717 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:44:13.131732 | orchestrator | 2025-03-23 13:44:13.131746 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-03-23 13:44:13.131760 | orchestrator | Sunday 23 March 2025 13:40:50 +0000 (0:00:00.137) 0:00:35.676 ********** 2025-03-23 13:44:13.131774 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:44:13.131788 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:44:13.131801 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:44:13.131815 | orchestrator | 2025-03-23 13:44:13.131829 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-03-23 13:44:13.131843 | orchestrator | Sunday 23 March 2025 13:40:50 +0000 (0:00:00.547) 0:00:36.224 ********** 2025-03-23 13:44:13.131857 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:44:13.131872 | orchestrator | 2025-03-23 13:44:13.131886 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-03-23 13:44:13.131900 | orchestrator | Sunday 23 March 2025 13:40:51 +0000 (0:00:00.709) 0:00:36.933 ********** 2025-03-23 13:44:13.131919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-23 13:44:13.131970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-23 13:44:13.131989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-23 13:44:13.132003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-23 13:44:13.132037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-23 13:44:13.132052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-23 13:44:13.132097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.132115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.132130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.132145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.132167 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.132187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.132202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.132247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.132264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.132278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.132300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.132320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.132334 | orchestrator | 2025-03-23 13:44:13.132349 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-03-23 13:44:13.132363 | orchestrator | Sunday 23 March 2025 13:40:58 +0000 (0:00:06.819) 0:00:43.752 ********** 2025-03-23 13:44:13.132378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-23 13:44:13.132423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-23 13:44:13.132440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.132455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.132476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.132496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-23 13:44:13.132511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.132525 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:44:13.132568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-23 13:44:13.132586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.132600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.132627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.132642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.132656 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:44:13.132671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-23 13:44:13.132739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-23 13:44:13.132757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.132778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.132798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.132813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.132828 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:44:13.132841 | orchestrator | 2025-03-23 13:44:13.132856 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-03-23 13:44:13.132869 | orchestrator | Sunday 23 March 2025 13:41:00 +0000 (0:00:02.468) 0:00:46.220 ********** 2025-03-23 13:44:13.132883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-23 13:44:13.132928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-23 13:44:13.132945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.132967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.132987 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.133002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.133016 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:44:13.133031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-23 13:44:13.133045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-23 13:44:13.133091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-23 
13:44:13.133115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.133135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.133150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.133164 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:44:13.133178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-23 13:44:13.133193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 
53'], 'timeout': '30'}}})  2025-03-23 13:44:13.133249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.133271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.133286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.133301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.133315 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:44:13.133329 | orchestrator | 2025-03-23 13:44:13.133343 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-03-23 13:44:13.133362 | orchestrator | Sunday 23 March 2025 13:41:03 +0000 (0:00:02.613) 0:00:48.834 ********** 2025-03-23 13:44:13.133376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': 
'30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-23 13:44:13.133420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-23 13:44:13.133445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-23 13:44:13.133465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-23 13:44:13.133480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-23 13:44:13.133494 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.133509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-23 13:44:13.133559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.133581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.133596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.133611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.133625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.133639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.133658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.133767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.133787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.133802 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.133816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.133831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.133845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.133872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.133888 | orchestrator | 2025-03-23 13:44:13.133902 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-03-23 13:44:13.133916 | orchestrator | Sunday 23 March 2025 13:41:13 +0000 (0:00:10.044) 0:00:58.878 ********** 2025-03-23 13:44:13.133963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-23 13:44:13.133980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-23 13:44:13.133995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-23 13:44:13.134009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-23 13:44:13.134074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-23 13:44:13.134144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-23 13:44:13.134163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.134177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.134190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.134203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.134247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.134261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.134305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.134320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.134333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.134345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.134368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.134393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.134411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.134425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.134438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.134450 | 
orchestrator | 2025-03-23 13:44:13.134463 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-03-23 13:44:13.134475 | orchestrator | Sunday 23 March 2025 13:41:45 +0000 (0:00:31.878) 0:01:30.756 ********** 2025-03-23 13:44:13.134488 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-03-23 13:44:13.134501 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-03-23 13:44:13.134514 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-03-23 13:44:13.134526 | orchestrator | 2025-03-23 13:44:13.134538 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-03-23 13:44:13.134551 | orchestrator | Sunday 23 March 2025 13:41:59 +0000 (0:00:14.371) 0:01:45.128 ********** 2025-03-23 13:44:13.134563 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-03-23 13:44:13.134580 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-03-23 13:44:13.134599 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-03-23 13:44:13.134611 | orchestrator | 2025-03-23 13:44:13.134623 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-03-23 13:44:13.134636 | orchestrator | Sunday 23 March 2025 13:42:06 +0000 (0:00:06.442) 0:01:51.570 ********** 2025-03-23 13:44:13.134657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-23 13:44:13.134698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-23 13:44:13.134713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-23 13:44:13.134726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-23 13:44:13.134740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.134770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.134784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.134797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-23 13:44:13.134816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.134829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.134842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.134861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.134883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.134896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-23 13:44:13.134914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.134927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.134940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.134953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.134985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.134999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.135012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135025 | orchestrator | 2025-03-23 13:44:13.135037 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-03-23 13:44:13.135050 | orchestrator | Sunday 23 March 2025 13:42:11 +0000 (0:00:05.437) 0:01:57.008 ********** 2025-03-23 13:44:13.135068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-23 13:44:13.135082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-23 13:44:13.135111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-23 13:44:13.135124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-23 13:44:13.135137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-23 13:44:13.135204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-23 13:44:13.135271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.135346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.135376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.135390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135481 | orchestrator | 2025-03-23 13:44:13.135495 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-03-23 13:44:13.135507 | orchestrator | Sunday 23 March 2025 13:42:15 +0000 (0:00:03.505) 0:02:00.514 ********** 2025-03-23 13:44:13.135520 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:44:13.135533 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:44:13.135545 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:44:13.135557 | orchestrator | 2025-03-23 13:44:13.135570 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-03-23 13:44:13.135582 | orchestrator | Sunday 23 March 2025 13:42:16 +0000 (0:00:01.062) 0:02:01.576 ********** 2025-03-23 13:44:13.135595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-23 13:44:13.135608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-23 13:44:13.135622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-23 13:44:13.135674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-23 13:44:13.135729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135781 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:44:13.135794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135855 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:44:13.135868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-23 13:44:13.135886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-23 13:44:13.135914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.135979 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:44:13.135992 | 
orchestrator | 2025-03-23 13:44:13.136004 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-03-23 13:44:13.136017 | orchestrator | Sunday 23 March 2025 13:42:18 +0000 (0:00:02.289) 0:02:03.865 ********** 2025-03-23 13:44:13.136035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-23 13:44:13.136063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-23 13:44:13.136077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-23 13:44:13.136090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-23 13:44:13.136103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-23 13:44:13.136116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-23 13:44:13.136143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.136166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.136180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.136193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.136206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.136219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.136232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.136266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.136281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.136293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.136306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.136319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.136332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-23 13:44:13.136363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-23 13:44:13.136382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-03-23 13:44:13.136395 | orchestrator |
2025-03-23 13:44:13.136407 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-03-23 13:44:13.136420 | orchestrator | Sunday 23 March 2025 13:42:26 +0000 (0:00:08.181) 0:02:12.047 **********
2025-03-23 13:44:13.136433 | orchestrator | skipping: [testbed-node-0]
2025-03-23 13:44:13.136445 | orchestrator | skipping: [testbed-node-1]
2025-03-23 13:44:13.136457 | orchestrator | skipping: [testbed-node-2]
2025-03-23 13:44:13.136469 | orchestrator |
2025-03-23 13:44:13.136482 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-03-23 13:44:13.136494 | orchestrator | Sunday 23 March 2025 13:42:27 +0000 (0:00:01.041) 0:02:13.088 **********
2025-03-23 13:44:13.136507 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-03-23 13:44:13.136519 | orchestrator |
2025-03-23 13:44:13.136532 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-03-23 13:44:13.136544 | orchestrator | Sunday 23 March 2025 13:42:30 +0000 (0:00:02.784) 0:02:15.873 **********
2025-03-23 13:44:13.136556 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-03-23 13:44:13.136569 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-03-23 13:44:13.136581 | orchestrator |
2025-03-23 13:44:13.136594 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-03-23 13:44:13.136606 | orchestrator | Sunday 23 March 2025 13:42:33 +0000 (0:00:03.019) 0:02:18.892 **********
2025-03-23 13:44:13.136618 | orchestrator | changed: [testbed-node-0]
2025-03-23 13:44:13.136630 | orchestrator |
2025-03-23 13:44:13.136643 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-03-23 13:44:13.136655 | orchestrator | Sunday 23 March 2025 13:42:49 +0000 (0:00:15.508) 0:02:34.400 **********
2025-03-23 13:44:13.136667 | orchestrator |
2025-03-23 13:44:13.136695 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-03-23 13:44:13.136708 | orchestrator | Sunday 23 March 2025 13:42:49 +0000 (0:00:00.160) 0:02:34.560 **********
2025-03-23 13:44:13.136720 | orchestrator |
2025-03-23 13:44:13.136732 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-03-23 13:44:13.136745 | orchestrator | Sunday 23 March 2025 13:42:49 +0000 (0:00:00.140) 0:02:34.701 **********
2025-03-23 13:44:13.136757 | orchestrator |
2025-03-23 13:44:13.136769 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-03-23 13:44:13.136785 | orchestrator | Sunday 23 March 2025 13:42:49 +0000 (0:00:00.148) 0:02:34.850 **********
2025-03-23 13:44:13.136797 | orchestrator | changed: [testbed-node-0]
2025-03-23 13:44:13.136816 | orchestrator | changed: [testbed-node-2]
2025-03-23 13:44:13.136828 | orchestrator | changed: [testbed-node-1]
2025-03-23 13:44:13.136840 | orchestrator |
2025-03-23 13:44:13.136853 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-03-23 13:44:13.136865 | orchestrator | Sunday 23 March 2025 13:42:58 +0000 (0:00:09.160) 0:02:44.010 **********
2025-03-23 13:44:13.136877 | orchestrator | changed: [testbed-node-2]
2025-03-23 13:44:13.136890 | orchestrator | changed: [testbed-node-1]
2025-03-23 13:44:13.136902 | orchestrator | changed: [testbed-node-0]
2025-03-23 13:44:13.136914 | orchestrator |
2025-03-23 13:44:13.136926 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-03-23 13:44:13.136938 | orchestrator | Sunday 23 March 2025 13:43:10 +0000 (0:00:11.644) 0:02:55.654 **********
2025-03-23 13:44:13.136950 | orchestrator | changed: [testbed-node-0]
2025-03-23 13:44:13.136962 | orchestrator | changed: [testbed-node-1]
2025-03-23 13:44:13.136974 | orchestrator | changed: [testbed-node-2]
2025-03-23 13:44:13.136987 | orchestrator |
2025-03-23 13:44:13.136999 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-03-23 13:44:13.137011 | orchestrator | Sunday 23 March 2025 13:43:21 +0000 (0:00:11.433) 0:03:07.088 **********
2025-03-23 13:44:13.137023 | orchestrator | changed: [testbed-node-0]
2025-03-23 13:44:13.137036 | orchestrator | changed: [testbed-node-2]
2025-03-23 13:44:13.137048 | orchestrator | changed: [testbed-node-1]
2025-03-23 13:44:13.137060 | orchestrator |
2025-03-23 13:44:13.137072 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-03-23 13:44:13.137084 | orchestrator | Sunday 23 March 2025 13:43:34 +0000 (0:00:12.737) 0:03:19.826 **********
2025-03-23 13:44:13.137096 | orchestrator | changed: [testbed-node-1]
2025-03-23 13:44:13.137109 | orchestrator | changed: [testbed-node-2]
2025-03-23 13:44:13.137121 | orchestrator | changed: [testbed-node-0]
2025-03-23 13:44:13.137138 | orchestrator |
2025-03-23 13:44:13.137150 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-03-23 13:44:13.137163 | orchestrator | Sunday 23 March 2025 13:43:49 +0000 (0:00:15.482) 0:03:35.309 **********
2025-03-23 13:44:13.137175 | orchestrator | changed: [testbed-node-0]
2025-03-23 13:44:13.137187 | orchestrator | changed: [testbed-node-2]
2025-03-23 13:44:13.137200 | orchestrator | changed: [testbed-node-1]
2025-03-23 13:44:13.137212 | orchestrator |
2025-03-23 13:44:13.137224 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-03-23 13:44:13.137244 | orchestrator | Sunday 23 March 2025 13:44:01 +0000 (0:00:11.315) 0:03:46.624 **********
2025-03-23 13:44:13.137264 | orchestrator | changed: [testbed-node-0]
2025-03-23 13:44:13.137284 | orchestrator |
2025-03-23 13:44:13.137302 | orchestrator | PLAY RECAP *********************************************************************
2025-03-23 13:44:13.137321 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-03-23 13:44:16.179176 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-23 13:44:16.179296 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-23 13:44:16.179316 | orchestrator |
2025-03-23 13:44:16.179333 | orchestrator |
2025-03-23 13:44:16.179348 | orchestrator | TASKS RECAP ********************************************************************
2025-03-23 13:44:16.179363 | orchestrator | Sunday 23 March 2025 13:44:08 +0000 (0:00:07.180) 0:03:53.804 **********
2025-03-23 13:44:16.179378 | orchestrator | ===============================================================================
2025-03-23 13:44:16.179391 | orchestrator | designate : Copying over designate.conf -------------------------------- 31.88s
2025-03-23 13:44:16.179405 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.51s
2025-03-23 13:44:16.179419 | orchestrator | designate : Restart designate-mdns container --------------------------- 15.48s
2025-03-23 13:44:16.179459 | orchestrator | designate : Copying over pools.yaml ------------------------------------ 14.37s
2025-03-23 13:44:16.179474 | orchestrator | designate : Restart designate-producer container ----------------------- 12.74s
2025-03-23 13:44:16.179488 | orchestrator | designate : Restart designate-api container ---------------------------- 11.64s
2025-03-23 13:44:16.179501 | orchestrator | designate : Restart designate-central container ------------------------ 11.43s
2025-03-23 13:44:16.179515 | orchestrator | designate : Restart designate-worker container ------------------------- 11.32s
2025-03-23 13:44:16.179529 | orchestrator | designate : Copying over config.json files for services ---------------- 10.04s
2025-03-23 13:44:16.179543 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 9.16s
2025-03-23 13:44:16.179556 | orchestrator | designate : Check designate containers ---------------------------------- 8.18s
2025-03-23 13:44:16.179570 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.77s
2025-03-23 13:44:16.179584 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.18s
2025-03-23 13:44:16.179598 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.82s
2025-03-23 13:44:16.179612 | orchestrator | designate : Copying over named.conf ------------------------------------- 6.44s
2025-03-23 13:44:16.179640 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 5.44s
2025-03-23 13:44:16.179655 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.78s
2025-03-23 13:44:16.179669 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.57s
2025-03-23 13:44:16.179711 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.10s
2025-03-23 13:44:16.179727 | orchestrator | service-ks-register : designate | Creating projects --------------------- 4.04s
2025-03-23 13:44:16.179744 | orchestrator | 2025-03-23 13:44:13 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:44:16.179765 | orchestrator | 2025-03-23 13:44:13 | INFO  | Task 0141c789-2565-4da2-9371-98db29bbbded is in state STARTED 2025-03-23 13:44:16.179782 | orchestrator | 2025-03-23 13:44:13 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:44:16.179814 | orchestrator | 2025-03-23 13:44:16 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:44:16.180475 | orchestrator | 2025-03-23 13:44:16 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:44:16.181625 | orchestrator | 2025-03-23 13:44:16 | INFO  | Task
acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:44:16.182816 | orchestrator | 2025-03-23 13:44:16 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:44:16.184368 | orchestrator | 2025-03-23 13:44:16 | INFO  | Task 0141c789-2565-4da2-9371-98db29bbbded is in state STARTED 2025-03-23 13:44:19.231364 | orchestrator | 2025-03-23 13:44:16 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:44:19.231484 | orchestrator | 2025-03-23 13:44:19 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:44:19.233242 | orchestrator | 2025-03-23 13:44:19 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:44:19.233277 | orchestrator | 2025-03-23 13:44:19 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:44:19.236661 | orchestrator | 2025-03-23 13:44:19 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:44:19.238594 | orchestrator | 2025-03-23 13:44:19 | INFO  | Task 0141c789-2565-4da2-9371-98db29bbbded is in state STARTED 2025-03-23 13:44:22.269010 | orchestrator | 2025-03-23 13:44:19 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:44:22.269169 | orchestrator | 2025-03-23 13:44:22 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:44:22.269793 | orchestrator | 2025-03-23 13:44:22 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:44:22.269848 | orchestrator | 2025-03-23 13:44:22 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:44:22.271155 | orchestrator | 2025-03-23 13:44:22 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:44:22.271733 | orchestrator | 2025-03-23 13:44:22 | INFO  | Task 0141c789-2565-4da2-9371-98db29bbbded is in state STARTED 2025-03-23 13:44:25.308059 | orchestrator | 2025-03-23 13:44:22 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:44:25.308186 | orchestrator | 2025-03-23 13:44:25 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:44:25.308731 | orchestrator | 2025-03-23 13:44:25 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:44:25.310302 | orchestrator | 2025-03-23 13:44:25 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:44:25.310869 | orchestrator | 2025-03-23 13:44:25 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:44:25.311518 | orchestrator | 2025-03-23 13:44:25 | INFO  | Task 0141c789-2565-4da2-9371-98db29bbbded is in state STARTED 2025-03-23 13:44:28.340163 | orchestrator | 2025-03-23 13:44:25 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:44:28.340283 | orchestrator | 2025-03-23 13:44:28 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:44:28.340785 | orchestrator | 2025-03-23 13:44:28 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:44:28.340823 | orchestrator | 2025-03-23 13:44:28 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:44:28.341579 | orchestrator | 2025-03-23 13:44:28 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:44:28.342170 | orchestrator | 2025-03-23 13:44:28 | INFO  | Task 0141c789-2565-4da2-9371-98db29bbbded is in state STARTED 2025-03-23 13:44:31.375056 | orchestrator | 2025-03-23 
13:44:28 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:44:31.375330 | orchestrator | 2025-03-23 13:44:31 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:44:31.375832 | orchestrator | 2025-03-23 13:44:31 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:44:31.375869 | orchestrator | 2025-03-23 13:44:31 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:44:31.376740 | orchestrator | 2025-03-23 13:44:31 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:44:31.377313 | orchestrator | 2025-03-23 13:44:31 | INFO  | Task 0141c789-2565-4da2-9371-98db29bbbded is in state STARTED 2025-03-23 13:44:34.421028 | orchestrator | 2025-03-23 13:44:31 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:44:34.421125 | orchestrator | 2025-03-23 13:44:34 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:44:34.421867 | orchestrator | 2025-03-23 13:44:34 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:44:34.421880 | orchestrator | 2025-03-23 13:44:34 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:44:34.421892 | orchestrator | 2025-03-23 13:44:34 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:44:34.423802 | orchestrator | 2025-03-23 13:44:34 | INFO  | Task 0141c789-2565-4da2-9371-98db29bbbded is in state STARTED 2025-03-23 13:44:37.474409 | orchestrator | 2025-03-23 13:44:34 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:44:37.474544 | orchestrator | 2025-03-23 13:44:37 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:44:37.475088 | orchestrator | 2025-03-23 13:44:37 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:44:37.475123 | orchestrator | 2025-03-23 13:44:37 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:44:37.477564 | orchestrator | 2025-03-23 13:44:37 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:44:37.478247 | orchestrator | 2025-03-23 13:44:37 | INFO  | Task 0141c789-2565-4da2-9371-98db29bbbded is in state STARTED 2025-03-23 13:44:37.478452 | orchestrator | 2025-03-23 13:44:37 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:44:40.526363 | orchestrator | 2025-03-23 13:44:40 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:44:40.527146 | orchestrator | 2025-03-23 13:44:40 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:44:40.529326 | orchestrator | 2025-03-23 13:44:40 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:44:40.531368 | orchestrator | 2025-03-23 13:44:40 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:44:40.533064 | orchestrator | 2025-03-23 13:44:40 | INFO  | Task 0141c789-2565-4da2-9371-98db29bbbded is in state STARTED 2025-03-23 13:44:40.533179 | orchestrator | 2025-03-23 13:44:40 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:44:43.577901 | orchestrator | 2025-03-23 13:44:43 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:44:43.579230 | orchestrator | 2025-03-23 13:44:43 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:44:43.580547 | orchestrator | 2025-03-23 
13:44:43 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:44:43.582353 | orchestrator | 2025-03-23 13:44:43 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:44:43.583473 | orchestrator | 2025-03-23 13:44:43 | INFO  | Task 0141c789-2565-4da2-9371-98db29bbbded is in state STARTED 2025-03-23 13:44:43.583613 | orchestrator | 2025-03-23 13:44:43 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:44:46.635186 | orchestrator | 2025-03-23 13:44:46 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:44:46.636439 | orchestrator | 2025-03-23 13:44:46 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:44:46.638428 | orchestrator | 2025-03-23 13:44:46 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:44:46.640223 | orchestrator | 2025-03-23 13:44:46 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:44:46.641374 | orchestrator | 2025-03-23 13:44:46 | INFO  | Task 0141c789-2565-4da2-9371-98db29bbbded is in state STARTED 2025-03-23 13:44:49.706554 | orchestrator | 2025-03-23 13:44:46 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:44:49.706718 | orchestrator | 2025-03-23 13:44:49 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:44:49.709515 | orchestrator | 2025-03-23 13:44:49 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:44:49.710146 | orchestrator | 2025-03-23 13:44:49 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:44:49.710206 | orchestrator | 2025-03-23 13:44:49 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:44:49.711817 | orchestrator | 2025-03-23 13:44:49 | INFO  | Task 0141c789-2565-4da2-9371-98db29bbbded is in state STARTED 2025-03-23 13:44:49.713242 | orchestrator | 2025-03-23 13:44:49 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:44:52.767895 | orchestrator | 2025-03-23 13:44:52 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:44:52.770175 | orchestrator | 2025-03-23 13:44:52 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:44:52.771569 | orchestrator | 2025-03-23 13:44:52 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:44:52.773479 | orchestrator | 2025-03-23 13:44:52 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:44:52.775434 | orchestrator | 2025-03-23 13:44:52 | INFO  | Task 0141c789-2565-4da2-9371-98db29bbbded is in state STARTED 2025-03-23 13:44:55.825450 | orchestrator | 2025-03-23 13:44:52 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:44:55.825569 | orchestrator | 2025-03-23 13:44:55 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:44:55.826100 | orchestrator | 2025-03-23 13:44:55 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:44:55.826956 | orchestrator | 2025-03-23 13:44:55 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:44:55.828217 | orchestrator | 2025-03-23 13:44:55 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:44:55.830634 | orchestrator | 2025-03-23 13:44:55 | INFO  | Task 0141c789-2565-4da2-9371-98db29bbbded is in state STARTED 2025-03-23 13:44:55.830786 | 
orchestrator | 2025-03-23 13:44:55 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:44:58.865305 | orchestrator | 2025-03-23 13:44:58 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:44:58.865722 | orchestrator | 2025-03-23 13:44:58 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:44:58.866581 | orchestrator | 2025-03-23 13:44:58 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:44:58.867042 | orchestrator | 2025-03-23 13:44:58 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:44:58.867835 | orchestrator | 2025-03-23 13:44:58 | INFO  | Task 0141c789-2565-4da2-9371-98db29bbbded is in state STARTED 2025-03-23 13:45:01.909519 | orchestrator | 2025-03-23 13:44:58 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:45:01.909692 | orchestrator | 2025-03-23 13:45:01 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:45:01.909965 | orchestrator | 2025-03-23 13:45:01 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:45:01.911577 | orchestrator | 2025-03-23 13:45:01 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:45:01.912508 | orchestrator | 2025-03-23 13:45:01 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:45:01.913432 | orchestrator | 2025-03-23 13:45:01 | INFO  | Task 0141c789-2565-4da2-9371-98db29bbbded is in state STARTED 2025-03-23 13:45:01.914073 | orchestrator | 2025-03-23 13:45:01 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:45:04.946774 | orchestrator | 2025-03-23 13:45:04 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:45:04.947579 | orchestrator | 2025-03-23 13:45:04 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:45:04.950442 | orchestrator | 2025-03-23 13:45:04 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:45:04.951749 | orchestrator | 2025-03-23 13:45:04 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:45:04.952846 | orchestrator | 2025-03-23 13:45:04 | INFO  | Task 0141c789-2565-4da2-9371-98db29bbbded is in state STARTED 2025-03-23 13:45:08.006489 | orchestrator | 2025-03-23 13:45:04 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:45:08.006628 | orchestrator | 2025-03-23 13:45:08 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:45:08.011517 | orchestrator | 2025-03-23 13:45:08 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:45:08.013457 | orchestrator | 2025-03-23 13:45:08 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:45:08.014329 | orchestrator | 2025-03-23 13:45:08 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:45:08.015766 | orchestrator | 2025-03-23 13:45:08 | INFO  | Task 0141c789-2565-4da2-9371-98db29bbbded is in state STARTED 2025-03-23 13:45:11.061302 | orchestrator | 2025-03-23 13:45:08 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:45:11.061444 | orchestrator | 2025-03-23 13:45:11 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:45:11.062739 | orchestrator | 2025-03-23 13:45:11 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:45:11.064636 | 
orchestrator | 2025-03-23 13:45:11 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:45:11.066579 | orchestrator | 2025-03-23 13:45:11 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:45:11.069490 | orchestrator | 2025-03-23 13:45:11 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:45:11.072282 | orchestrator | 2025-03-23 13:45:11 | INFO  | Task 0141c789-2565-4da2-9371-98db29bbbded is in state SUCCESS 2025-03-23 13:45:14.131475 | orchestrator | 2025-03-23 13:45:11 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:45:14.131592 | orchestrator | 2025-03-23 13:45:14 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:45:14.137154 | orchestrator | 2025-03-23 13:45:14 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:45:14.140201 | orchestrator | 2025-03-23 13:45:14 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:45:14.142689 | orchestrator | 2025-03-23 13:45:14 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:45:14.144522 | orchestrator | 2025-03-23 13:45:14 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:45:17.182932 | orchestrator | 2025-03-23 13:45:14 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:45:17.183077 | orchestrator | 2025-03-23 13:45:17 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:45:17.184892 | orchestrator | 2025-03-23 13:45:17 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:45:17.187509 | orchestrator | 2025-03-23 13:45:17 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:45:17.189728 | orchestrator | 2025-03-23 13:45:17 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:45:17.196606 | orchestrator | 2025-03-23 13:45:17 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:45:17.196870 | orchestrator | 2025-03-23 13:45:17 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:45:20.233501 | orchestrator | 2025-03-23 13:45:20 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:45:20.234110 | orchestrator | 2025-03-23 13:45:20 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:45:20.234854 | orchestrator | 2025-03-23 13:45:20 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:45:20.236008 | orchestrator | 2025-03-23 13:45:20 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:45:20.237679 | orchestrator | 2025-03-23 13:45:20 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:45:20.238557 | orchestrator | 2025-03-23 13:45:20 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:45:23.291767 | orchestrator | 2025-03-23 13:45:23 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:45:23.294883 | orchestrator | 2025-03-23 13:45:23 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:45:23.294927 | orchestrator | 2025-03-23 13:45:23 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state STARTED 2025-03-23 13:45:23.295478 | orchestrator | 2025-03-23 13:45:23 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 
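The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" entries above are produced by a poll-and-wait loop over asynchronous task IDs that keeps checking until each task reaches a terminal state (e.g. SUCCESS). The sketch below is only an illustration of that pattern, assuming a hypothetical get_task_state() lookup and a fixed one-second interval; it is not the actual OSISM client code.

    # Illustrative sketch only: poll a set of task IDs until each reaches a
    # terminal state, logging progress in the same style as the console output.
    # `get_task_state` is a hypothetical callable, not an actual OSISM API.
    import time
    from datetime import datetime, timezone

    TERMINAL_STATES = {"SUCCESS", "FAILURE"}

    def log(msg: str) -> None:
        # Mimics the "<timestamp> | INFO  | <message>" lines seen above.
        print(f"{datetime.now(timezone.utc):%Y-%m-%d %H:%M:%S} | INFO  | {msg}")

    def wait_for_tasks(task_ids, get_task_state, interval: float = 1.0) -> None:
        """Poll until every task ID reports a terminal state."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # e.g. STARTED, SUCCESS, FAILURE
                log(f"Task {task_id} is in state {state}")
                if state in TERMINAL_STATES:
                    pending.discard(task_id)
            if pending:
                log(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)

A real client might add a timeout or exponential backoff instead of a constant one-second sleep; the fixed interval here simply mirrors the cadence visible in this log.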
2025-03-23 13:45:23.296085 | orchestrator | 2025-03-23 13:45:23 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:45:26.341531 | orchestrator | 2025-03-23 13:45:23 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:45:26.341701 | orchestrator | 2025-03-23 13:45:26 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:45:26.342519 | orchestrator | 2025-03-23 13:45:26 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:45:26.342557 | orchestrator | 2025-03-23 13:45:26 | INFO  | Task acfa008c-dc31-42ed-b08a-aec891975370 is in state SUCCESS 2025-03-23 13:45:26.342880 | orchestrator | 2025-03-23 13:45:26.342909 | orchestrator | 2025-03-23 13:45:26.342923 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 13:45:26.343017 | orchestrator | 2025-03-23 13:45:26.343064 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 13:45:26.343080 | orchestrator | Sunday 23 March 2025 13:44:27 +0000 (0:00:01.399) 0:00:01.399 ********** 2025-03-23 13:45:26.343094 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:45:26.343109 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:45:26.343122 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:45:26.343908 | orchestrator | ok: [testbed-manager] 2025-03-23 13:45:26.343941 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:45:26.343957 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:45:26.343972 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:45:26.343987 | orchestrator | 2025-03-23 13:45:26.344003 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-23 13:45:26.344018 | orchestrator | Sunday 23 March 2025 13:44:30 +0000 (0:00:03.491) 0:00:04.890 ********** 2025-03-23 13:45:26.344033 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-03-23 13:45:26.344049 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-03-23 13:45:26.344064 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-03-23 13:45:26.344079 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-03-23 13:45:26.344094 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-03-23 13:45:26.344109 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-03-23 13:45:26.344149 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-03-23 13:45:26.344165 | orchestrator | 2025-03-23 13:45:26.344180 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-03-23 13:45:26.344195 | orchestrator | 2025-03-23 13:45:26.344210 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-03-23 13:45:26.344225 | orchestrator | Sunday 23 March 2025 13:44:32 +0000 (0:00:02.036) 0:00:06.926 ********** 2025-03-23 13:45:26.344241 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:45:26.344257 | orchestrator | 2025-03-23 13:45:26.344272 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-03-23 13:45:26.344287 | orchestrator | Sunday 23 March 2025 13:44:37 +0000 (0:00:04.577) 0:00:11.504 ********** 2025-03-23 13:45:26.344302 | 
orchestrator | changed: [testbed-node-3] => (item=swift (object-store)) 2025-03-23 13:45:26.344317 | orchestrator | 2025-03-23 13:45:26.344332 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-03-23 13:45:26.344347 | orchestrator | Sunday 23 March 2025 13:44:42 +0000 (0:00:04.824) 0:00:16.329 ********** 2025-03-23 13:45:26.344363 | orchestrator | changed: [testbed-node-3] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-03-23 13:45:26.344380 | orchestrator | changed: [testbed-node-3] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-03-23 13:45:26.344395 | orchestrator | 2025-03-23 13:45:26.344410 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-03-23 13:45:26.344425 | orchestrator | Sunday 23 March 2025 13:44:49 +0000 (0:00:07.690) 0:00:24.019 ********** 2025-03-23 13:45:26.344439 | orchestrator | ok: [testbed-node-3] => (item=service) 2025-03-23 13:45:26.344455 | orchestrator | 2025-03-23 13:45:26.344470 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-03-23 13:45:26.344485 | orchestrator | Sunday 23 March 2025 13:44:53 +0000 (0:00:03.652) 0:00:27.672 ********** 2025-03-23 13:45:26.344500 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-03-23 13:45:26.344515 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> service) 2025-03-23 13:45:26.344530 | orchestrator | 2025-03-23 13:45:26.344545 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-03-23 13:45:26.344562 | orchestrator | Sunday 23 March 2025 13:44:57 +0000 (0:00:03.776) 0:00:31.449 ********** 2025-03-23 13:45:26.344578 | orchestrator | ok: [testbed-node-3] => (item=admin) 2025-03-23 13:45:26.344595 | orchestrator | changed: [testbed-node-3] => (item=ResellerAdmin) 2025-03-23 13:45:26.344611 | orchestrator | 2025-03-23 13:45:26.344627 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-03-23 13:45:26.344661 | orchestrator | Sunday 23 March 2025 13:45:03 +0000 (0:00:06.387) 0:00:37.836 ********** 2025-03-23 13:45:26.344677 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> service -> admin) 2025-03-23 13:45:26.344692 | orchestrator | 2025-03-23 13:45:26.344708 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:45:26.344724 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:45:26.344739 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:45:26.344755 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:45:26.344771 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:45:26.344787 | orchestrator | testbed-node-3 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:45:26.344830 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:45:26.344846 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:45:26.344862 | orchestrator | 2025-03-23 13:45:26.344877 
| orchestrator | 2025-03-23 13:45:26.344893 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:45:26.344908 | orchestrator | Sunday 23 March 2025 13:45:08 +0000 (0:00:04.750) 0:00:42.587 ********** 2025-03-23 13:45:26.344922 | orchestrator | =============================================================================== 2025-03-23 13:45:26.344937 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.69s 2025-03-23 13:45:26.344951 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.39s 2025-03-23 13:45:26.344973 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.82s 2025-03-23 13:45:26.344987 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.75s 2025-03-23 13:45:26.345001 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 4.58s 2025-03-23 13:45:26.345015 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.78s 2025-03-23 13:45:26.345029 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.65s 2025-03-23 13:45:26.345043 | orchestrator | Group hosts based on Kolla action --------------------------------------- 3.49s 2025-03-23 13:45:26.345057 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.04s 2025-03-23 13:45:26.345071 | orchestrator | 2025-03-23 13:45:26.345085 | orchestrator | 2025-03-23 13:45:26.345099 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 13:45:26.345113 | orchestrator | 2025-03-23 13:45:26.345127 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 13:45:26.345140 | orchestrator | Sunday 23 March 2025 13:42:49 +0000 (0:00:00.940) 0:00:00.940 ********** 2025-03-23 13:45:26.345154 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:45:26.345168 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:45:26.345182 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:45:26.345202 | orchestrator | 2025-03-23 13:45:26.345216 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-23 13:45:26.345230 | orchestrator | Sunday 23 March 2025 13:42:49 +0000 (0:00:00.442) 0:00:01.382 ********** 2025-03-23 13:45:26.345244 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-03-23 13:45:26.345258 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-03-23 13:45:26.345272 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-03-23 13:45:26.345286 | orchestrator | 2025-03-23 13:45:26.345300 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-03-23 13:45:26.345314 | orchestrator | 2025-03-23 13:45:26.345328 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-03-23 13:45:26.345342 | orchestrator | Sunday 23 March 2025 13:42:50 +0000 (0:00:00.506) 0:00:01.888 ********** 2025-03-23 13:45:26.345356 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:45:26.345370 | orchestrator | 2025-03-23 13:45:26.345385 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-03-23 
13:45:26.345399 | orchestrator | Sunday 23 March 2025 13:42:51 +0000 (0:00:01.370) 0:00:03.259 ********** 2025-03-23 13:45:26.345413 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-03-23 13:45:26.345426 | orchestrator | 2025-03-23 13:45:26.345440 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-03-23 13:45:26.345454 | orchestrator | Sunday 23 March 2025 13:42:55 +0000 (0:00:04.179) 0:00:07.438 ********** 2025-03-23 13:45:26.345468 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-03-23 13:45:26.345490 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-03-23 13:45:26.345504 | orchestrator | 2025-03-23 13:45:26.345518 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-03-23 13:45:26.345538 | orchestrator | Sunday 23 March 2025 13:43:03 +0000 (0:00:07.481) 0:00:14.920 ********** 2025-03-23 13:45:26.345553 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-03-23 13:45:26.345567 | orchestrator | 2025-03-23 13:45:26.345581 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-03-23 13:45:26.345595 | orchestrator | Sunday 23 March 2025 13:43:07 +0000 (0:00:04.009) 0:00:18.930 ********** 2025-03-23 13:45:26.345609 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-03-23 13:45:26.345623 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-03-23 13:45:26.345665 | orchestrator | 2025-03-23 13:45:26.345680 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-03-23 13:45:26.345695 | orchestrator | Sunday 23 March 2025 13:43:12 +0000 (0:00:04.740) 0:00:23.670 ********** 2025-03-23 13:45:26.345709 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-03-23 13:45:26.345723 | orchestrator | 2025-03-23 13:45:26.345737 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-03-23 13:45:26.345752 | orchestrator | Sunday 23 March 2025 13:43:16 +0000 (0:00:04.136) 0:00:27.806 ********** 2025-03-23 13:45:26.345766 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-03-23 13:45:26.345780 | orchestrator | 2025-03-23 13:45:26.345793 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-03-23 13:45:26.345808 | orchestrator | Sunday 23 March 2025 13:43:21 +0000 (0:00:05.018) 0:00:32.825 ********** 2025-03-23 13:45:26.345822 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:45:26.345836 | orchestrator | 2025-03-23 13:45:26.345850 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-03-23 13:45:26.345872 | orchestrator | Sunday 23 March 2025 13:43:25 +0000 (0:00:03.833) 0:00:36.659 ********** 2025-03-23 13:45:26.345887 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:45:26.345901 | orchestrator | 2025-03-23 13:45:26.345915 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-03-23 13:45:26.345942 | orchestrator | Sunday 23 March 2025 13:43:29 +0000 (0:00:04.732) 0:00:41.391 ********** 2025-03-23 13:45:26.345957 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:45:26.345970 | orchestrator | 2025-03-23 13:45:26.345984 | orchestrator | TASK 
[magnum : Ensuring config directories exist] ****************************** 2025-03-23 13:45:26.345998 | orchestrator | Sunday 23 March 2025 13:43:34 +0000 (0:00:04.617) 0:00:46.009 ********** 2025-03-23 13:45:26.346065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-23 13:45:26.346089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-23 13:45:26.346112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-23 13:45:26.346129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:45:26.346154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:45:26.346325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:45:26.346351 | orchestrator | 2025-03-23 13:45:26.346366 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-03-23 13:45:26.346380 | orchestrator | Sunday 23 March 2025 13:43:38 +0000 (0:00:03.755) 0:00:49.764 ********** 2025-03-23 13:45:26.346403 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:45:26.346418 | orchestrator | 2025-03-23 13:45:26.346432 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-03-23 13:45:26.346446 | orchestrator | Sunday 23 March 2025 13:43:38 +0000 (0:00:00.357) 0:00:50.122 ********** 2025-03-23 13:45:26.346459 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:45:26.346473 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:45:26.346487 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:45:26.346501 | orchestrator | 2025-03-23 13:45:26.346515 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-03-23 13:45:26.346529 | orchestrator | Sunday 23 March 2025 13:43:39 +0000 (0:00:01.172) 0:00:51.294 ********** 2025-03-23 13:45:26.346543 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-23 13:45:26.346557 | orchestrator | 2025-03-23 13:45:26.346571 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-03-23 13:45:26.346585 | orchestrator | Sunday 23 March 2025 13:43:42 +0000 (0:00:02.507) 0:00:53.801 ********** 2025-03-23 13:45:26.346600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-23 13:45:26.346615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:45:26.346631 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:45:26.346678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-23 13:45:26.346694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:45:26.346717 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:45:26.346749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-23 13:45:26.346765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:45:26.346780 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:45:26.346794 | orchestrator | 2025-03-23 13:45:26.346808 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-03-23 13:45:26.346822 | orchestrator | Sunday 23 March 2025 13:43:46 +0000 (0:00:03.706) 0:00:57.507 ********** 2025-03-23 13:45:26.346836 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:45:26.346850 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:45:26.346864 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:45:26.346878 | orchestrator | 2025-03-23 13:45:26.346891 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-03-23 13:45:26.346905 | orchestrator | Sunday 23 March 2025 13:43:47 +0000 (0:00:01.115) 0:00:58.623 ********** 2025-03-23 13:45:26.346919 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:45:26.346934 | orchestrator | 2025-03-23 13:45:26.346948 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-03-23 13:45:26.346964 | orchestrator | Sunday 23 March 2025 13:43:50 +0000 (0:00:03.530) 0:01:02.154 ********** 2025-03-23 13:45:26.346987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-23 13:45:26.347022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-23 13:45:26.347040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-23 13:45:26.347056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:45:26.347072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:45:26.347096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:45:26.347119 | orchestrator | 2025-03-23 13:45:26.347134 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-03-23 13:45:26.347150 | orchestrator | Sunday 23 March 2025 13:43:55 +0000 (0:00:05.205) 0:01:07.359 ********** 2025-03-23 13:45:26.347176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-23 13:45:26.347192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:45:26.347208 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:45:26.347224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-23 13:45:26.347246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:45:26.347269 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:45:26.347285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-23 13:45:26.347311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:45:26.347327 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:45:26.347341 | orchestrator | 2025-03-23 13:45:26.347355 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-03-23 13:45:26.347369 | orchestrator | Sunday 23 March 2025 13:43:57 +0000 (0:00:01.739) 0:01:09.099 ********** 2025-03-23 13:45:26.347384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-23 13:45:26.347398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:45:26.347413 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:45:26.347436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-23 13:45:26.347458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:45:26.347472 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:45:26.347496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-23 13:45:26.347512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:45:26.347527 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:45:26.347540 | orchestrator | 2025-03-23 13:45:26.347555 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-03-23 13:45:26.347569 | orchestrator | Sunday 23 March 2025 13:43:59 +0000 (0:00:01.833) 0:01:10.933 ********** 2025-03-23 13:45:26.347583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-23 13:45:26.347627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-23 13:45:26.347660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-23 13:45:26.347675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:45:26.347690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:45:26.347704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}}) 2025-03-23 13:45:26.347725 | orchestrator | 2025-03-23 13:45:26.347751 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-03-23 13:45:26.347767 | orchestrator | Sunday 23 March 2025 13:44:03 +0000 (0:00:04.287) 0:01:15.220 ********** 2025-03-23 13:45:26.347792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-23 13:45:26.347808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-23 13:45:26.347823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-23 13:45:26.347837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:45:26.347876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:45:26.347892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:45:26.347907 | orchestrator | 2025-03-23 13:45:26.347921 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-03-23 13:45:26.347935 | orchestrator | Sunday 23 March 2025 13:44:25 +0000 (0:00:21.899) 0:01:37.119 ********** 2025-03-23 13:45:26.347950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-23 13:45:26.347964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:45:26.347978 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:45:26.348000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-23 13:45:26.348033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:45:26.348049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-23 13:45:26.348064 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:45:26.348078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:45:26.348092 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:45:26.348106 | orchestrator | 2025-03-23 13:45:26.348121 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-03-23 13:45:26.348135 | orchestrator | Sunday 23 March 2025 13:44:28 +0000 (0:00:02.769) 0:01:39.889 ********** 2025-03-23 13:45:26.348149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-23 13:45:26.348187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-23 13:45:26.348204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-23 13:45:26.348218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:45:26.348233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:45:26.348264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:45:26.348279 | orchestrator | 2025-03-23 13:45:26.348293 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-03-23 13:45:26.348307 | orchestrator | Sunday 23 March 2025 13:44:33 +0000 (0:00:05.051) 0:01:44.940 ********** 2025-03-23 13:45:26.348321 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:45:26.348336 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:45:26.348349 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:45:26.348363 | orchestrator | 2025-03-23 13:45:26.348377 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-03-23 13:45:26.348395 | orchestrator | Sunday 23 March 2025 13:44:34 +0000 (0:00:01.021) 0:01:45.961 ********** 2025-03-23 13:45:26.348409 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:45:26.348423 | orchestrator | 2025-03-23 13:45:26.348437 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-03-23 13:45:26.348451 | orchestrator | Sunday 23 
March 2025 13:44:38 +0000 (0:00:03.623) 0:01:49.585 ********** 2025-03-23 13:45:26.348464 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:45:26.348478 | orchestrator | 2025-03-23 13:45:26.348492 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-03-23 13:45:26.348506 | orchestrator | Sunday 23 March 2025 13:44:40 +0000 (0:00:02.744) 0:01:52.329 ********** 2025-03-23 13:45:26.348520 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:45:26.348534 | orchestrator | 2025-03-23 13:45:26.348554 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-03-23 13:45:29.379402 | orchestrator | Sunday 23 March 2025 13:44:55 +0000 (0:00:15.113) 0:02:07.443 ********** 2025-03-23 13:45:29.379503 | orchestrator | 2025-03-23 13:45:29.379520 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-03-23 13:45:29.379534 | orchestrator | Sunday 23 March 2025 13:44:56 +0000 (0:00:00.072) 0:02:07.516 ********** 2025-03-23 13:45:29.379548 | orchestrator | 2025-03-23 13:45:29.379562 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-03-23 13:45:29.379577 | orchestrator | Sunday 23 March 2025 13:44:56 +0000 (0:00:00.143) 0:02:07.659 ********** 2025-03-23 13:45:29.379590 | orchestrator | 2025-03-23 13:45:29.379604 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-03-23 13:45:29.379618 | orchestrator | Sunday 23 March 2025 13:44:56 +0000 (0:00:00.056) 0:02:07.716 ********** 2025-03-23 13:45:29.379674 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:45:29.379692 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:45:29.379707 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:45:29.379835 | orchestrator | 2025-03-23 13:45:29.379852 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-03-23 13:45:29.379866 | orchestrator | Sunday 23 March 2025 13:45:14 +0000 (0:00:17.989) 0:02:25.706 ********** 2025-03-23 13:45:29.379880 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:45:29.379894 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:45:29.379908 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:45:29.379922 | orchestrator | 2025-03-23 13:45:29.379936 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:45:29.379952 | orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-03-23 13:45:29.379993 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-03-23 13:45:29.380008 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-03-23 13:45:29.380022 | orchestrator | 2025-03-23 13:45:29.380036 | orchestrator | 2025-03-23 13:45:29.380050 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:45:29.380064 | orchestrator | Sunday 23 March 2025 13:45:24 +0000 (0:00:09.891) 0:02:35.597 ********** 2025-03-23 13:45:29.380078 | orchestrator | =============================================================================== 2025-03-23 13:45:29.380092 | orchestrator | magnum : Copying over magnum.conf -------------------------------------- 21.90s 2025-03-23 13:45:29.380106 | orchestrator | magnum : Restart 
magnum-api container ---------------------------------- 17.99s 2025-03-23 13:45:29.380120 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.11s 2025-03-23 13:45:29.380134 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 9.89s 2025-03-23 13:45:29.380148 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.48s 2025-03-23 13:45:29.380162 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 5.21s 2025-03-23 13:45:29.380176 | orchestrator | magnum : Check magnum containers ---------------------------------------- 5.05s 2025-03-23 13:45:29.380190 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 5.02s 2025-03-23 13:45:29.380203 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.74s 2025-03-23 13:45:29.380217 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.73s 2025-03-23 13:45:29.380231 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.62s 2025-03-23 13:45:29.380244 | orchestrator | magnum : Copying over config.json files for services -------------------- 4.29s 2025-03-23 13:45:29.380258 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.18s 2025-03-23 13:45:29.380272 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 4.14s 2025-03-23 13:45:29.380299 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 4.01s 2025-03-23 13:45:29.380313 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.83s 2025-03-23 13:45:29.380327 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 3.76s 2025-03-23 13:45:29.380340 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.71s 2025-03-23 13:45:29.380354 | orchestrator | magnum : Creating Magnum database --------------------------------------- 3.62s 2025-03-23 13:45:29.380368 | orchestrator | magnum : include_tasks -------------------------------------------------- 3.53s 2025-03-23 13:45:29.380382 | orchestrator | 2025-03-23 13:45:26 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:45:29.380396 | orchestrator | 2025-03-23 13:45:26 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:45:29.380411 | orchestrator | 2025-03-23 13:45:26 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:45:29.380431 | orchestrator | 2025-03-23 13:45:26 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:45:29.380462 | orchestrator | 2025-03-23 13:45:29 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:45:29.382390 | orchestrator | 2025-03-23 13:45:29 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:45:29.382422 | orchestrator | 2025-03-23 13:45:29 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:45:29.382959 | orchestrator | 2025-03-23 13:45:29 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:45:29.383920 | orchestrator | 2025-03-23 13:45:29 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:45:32.412827 | orchestrator | 2025-03-23 
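Each loop item echoed in the magnum play above is one kolla-ansible service definition: container name, group, image, the bind-mounted config volumes, a Docker healthcheck and, for the API service, the HAProxy frontends. Reassembled as YAML from the values printed in the log, such an entry has roughly this shape (an illustration of the data structure, not a copy of the kolla-ansible source; the empty '' volume entries above appear to be optional volume slots that evaluated to nothing for this configuration):

magnum-api:
  container_name: magnum_api
  group: magnum-api
  enabled: true
  image: registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206
  environment:
    DUMMY_ENVIRONMENT: kolla_useless_env
  volumes:
    - /etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - kolla_logs:/var/log/kolla/
  healthcheck:                  # executed by Docker inside the container
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9511"]
    timeout: "30"
  haproxy:                      # frontends rendered into the HAProxy configuration
    magnum_api:
      enabled: "yes"
      mode: http
      external: false
      port: "9511"
      listen_port: "9511"
    magnum_api_external:
      enabled: "yes"
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "9511"
      listen_port: "9511"
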
13:45:29 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:45:32.412953 | orchestrator | 2025-03-23 13:45:32 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:45:32.413822 | orchestrator | 2025-03-23 13:45:32 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:45:32.413856 | orchestrator | 2025-03-23 13:45:32 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:45:32.415934 | orchestrator | 2025-03-23 13:45:32 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:45:32.416990 | orchestrator | 2025-03-23 13:45:32 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:45:32.417260 | orchestrator | 2025-03-23 13:45:32 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:45:35.467065 | orchestrator | 2025-03-23 13:45:35 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:45:35.468424 | orchestrator | 2025-03-23 13:45:35 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:45:35.470148 | orchestrator | 2025-03-23 13:45:35 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:45:35.472157 | orchestrator | 2025-03-23 13:45:35 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:45:35.473758 | orchestrator | 2025-03-23 13:45:35 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:45:35.473929 | orchestrator | 2025-03-23 13:45:35 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:45:38.506478 | orchestrator | 2025-03-23 13:45:38 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:45:38.506953 | orchestrator | 2025-03-23 13:45:38 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:45:38.508544 | orchestrator | 2025-03-23 13:45:38 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:45:38.509416 | orchestrator | 2025-03-23 13:45:38 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:45:38.510273 | orchestrator | 2025-03-23 13:45:38 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:45:41.550203 | orchestrator | 2025-03-23 13:45:38 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:45:41.550330 | orchestrator | 2025-03-23 13:45:41 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:45:41.552615 | orchestrator | 2025-03-23 13:45:41 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:45:41.554689 | orchestrator | 2025-03-23 13:45:41 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:45:41.556295 | orchestrator | 2025-03-23 13:45:41 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:45:41.557865 | orchestrator | 2025-03-23 13:45:41 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:45:44.590234 | orchestrator | 2025-03-23 13:45:41 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:45:44.590359 | orchestrator | 2025-03-23 13:45:44 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:45:44.590672 | orchestrator | 2025-03-23 13:45:44 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:45:44.592115 | orchestrator | 2025-03-23 
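The "Copying over config.json files for services" and "Copying over magnum.conf" tasks recorded above follow the usual kolla-ansible pattern: render a file into /etc/kolla/<service>/ on every host in the service group and notify the matching restart handler when the content changes. The real role also layers configuration overrides (kolla's merge_configs plugin); a plain template task is enough to sketch the shape of the pattern:

- name: Copying over magnum.conf
  ansible.builtin.template:
    src: magnum.conf.j2
    dest: "/etc/kolla/{{ item }}/magnum.conf"
    mode: "0660"
  become: true
  loop:
    - magnum-api
    - magnum-conductor
  notify:
    # the real role notifies only the handler matching the service being templated
    - Restart magnum-api container
    - Restart magnum-conductor container
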
13:45:44 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:45:44.592729 | orchestrator | 2025-03-23 13:45:44 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:45:44.594105 | orchestrator | 2025-03-23 13:45:44 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:45:47.628221 | orchestrator | 2025-03-23 13:45:44 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:45:47.628366 | orchestrator | 2025-03-23 13:45:47 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:45:47.628581 | orchestrator | 2025-03-23 13:45:47 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:45:47.629388 | orchestrator | 2025-03-23 13:45:47 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:45:47.629802 | orchestrator | 2025-03-23 13:45:47 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:45:47.630749 | orchestrator | 2025-03-23 13:45:47 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:45:50.668787 | orchestrator | 2025-03-23 13:45:47 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:45:50.668900 | orchestrator | 2025-03-23 13:45:50 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:45:50.669455 | orchestrator | 2025-03-23 13:45:50 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:45:50.670080 | orchestrator | 2025-03-23 13:45:50 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:45:50.670773 | orchestrator | 2025-03-23 13:45:50 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:45:50.671494 | orchestrator | 2025-03-23 13:45:50 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:45:53.699563 | orchestrator | 2025-03-23 13:45:50 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:45:53.699719 | orchestrator | 2025-03-23 13:45:53 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:45:53.699867 | orchestrator | 2025-03-23 13:45:53 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:45:53.700763 | orchestrator | 2025-03-23 13:45:53 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:45:53.702183 | orchestrator | 2025-03-23 13:45:53 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:45:53.702356 | orchestrator | 2025-03-23 13:45:53 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:45:53.702716 | orchestrator | 2025-03-23 13:45:53 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:45:56.733890 | orchestrator | 2025-03-23 13:45:56 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:45:56.734869 | orchestrator | 2025-03-23 13:45:56 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:45:56.736999 | orchestrator | 2025-03-23 13:45:56 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:45:56.737491 | orchestrator | 2025-03-23 13:45:56 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:45:56.738420 | orchestrator | 2025-03-23 13:45:56 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:45:59.769918 | 
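"Creating Magnum database" and "Creating Magnum database user and setting permissions" report changed only on testbed-node-0, i.e. they effectively run once against the database VIP before the bootstrap container performs the schema migration. Kolla-ansible drives this through its toolbox container; the sketch below shows the equivalent steps with the community.mysql modules purely as an illustration (login host, credentials and variable names are assumptions):

- name: Creating Magnum database
  community.mysql.mysql_db:
    login_host: "{{ database_address }}"          # assumed: MariaDB VIP
    login_user: root
    login_password: "{{ database_password }}"
    name: magnum
    state: present
  run_once: true

- name: Creating Magnum database user and setting permissions
  community.mysql.mysql_user:
    login_host: "{{ database_address }}"
    login_user: root
    login_password: "{{ database_password }}"
    name: magnum
    password: "{{ magnum_database_password }}"    # assumed variable name
    host: "%"
    priv: "magnum.*:ALL"
    state: present
  run_once: true
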
orchestrator | 2025-03-23 13:45:56 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:45:59.770125 | orchestrator | 2025-03-23 13:45:59 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:45:59.775801 | orchestrator | 2025-03-23 13:45:59 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:45:59.776420 | orchestrator | 2025-03-23 13:45:59 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:45:59.776994 | orchestrator | 2025-03-23 13:45:59 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:45:59.777719 | orchestrator | 2025-03-23 13:45:59 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:46:02.821035 | orchestrator | 2025-03-23 13:45:59 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:46:02.821177 | orchestrator | 2025-03-23 13:46:02 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:46:02.821671 | orchestrator | 2025-03-23 13:46:02 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:46:02.822996 | orchestrator | 2025-03-23 13:46:02 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:46:02.825902 | orchestrator | 2025-03-23 13:46:02 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:46:02.826980 | orchestrator | 2025-03-23 13:46:02 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:46:02.827307 | orchestrator | 2025-03-23 13:46:02 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:46:05.911753 | orchestrator | 2025-03-23 13:46:05 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:46:05.917680 | orchestrator | 2025-03-23 13:46:05 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:46:05.920918 | orchestrator | 2025-03-23 13:46:05 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:46:05.922127 | orchestrator | 2025-03-23 13:46:05 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:46:05.924217 | orchestrator | 2025-03-23 13:46:05 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:46:08.971248 | orchestrator | 2025-03-23 13:46:05 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:46:08.971384 | orchestrator | 2025-03-23 13:46:08 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:46:08.971798 | orchestrator | 2025-03-23 13:46:08 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:46:08.972444 | orchestrator | 2025-03-23 13:46:08 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:46:08.973013 | orchestrator | 2025-03-23 13:46:08 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:46:08.974099 | orchestrator | 2025-03-23 13:46:08 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:46:12.015654 | orchestrator | 2025-03-23 13:46:08 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:46:12.015781 | orchestrator | 2025-03-23 13:46:12 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:46:12.017071 | orchestrator | 2025-03-23 13:46:12 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:46:12.018814 | 
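The three "Flush handlers" entries and the two RUNNING HANDLER lines above are Ansible's notify/flush pattern: configuration tasks queue a restart whenever a file changes, and meta: flush_handlers forces the queued restarts to run before the play moves on, so only containers whose configuration actually changed are bounced. A minimal sketch, with community.docker.docker_container standing in for kolla-ansible's own container module:

# In the role's task list: run any queued restarts now.
- name: Flush handlers
  ansible.builtin.meta: flush_handlers

# In the role's handlers:
- name: Restart magnum-api container
  community.docker.docker_container:
    name: magnum_api
    image: registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206
    state: started
    restart: true        # force a restart so the freshly templated config is picked up
  become: true
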
orchestrator | 2025-03-23 13:46:12 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:46:12.022328 | orchestrator | 2025-03-23 13:46:12 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:46:12.027710 | orchestrator | 2025-03-23 13:46:12 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:46:12.028465 | orchestrator | 2025-03-23 13:46:12 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:46:15.074469 | orchestrator | 2025-03-23 13:46:15 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:46:15.075283 | orchestrator | 2025-03-23 13:46:15 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:46:15.079181 | orchestrator | 2025-03-23 13:46:15 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:46:18.124390 | orchestrator | 2025-03-23 13:46:15 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:46:18.124476 | orchestrator | 2025-03-23 13:46:15 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:46:18.124493 | orchestrator | 2025-03-23 13:46:15 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:46:18.124543 | orchestrator | 2025-03-23 13:46:18 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:46:18.125463 | orchestrator | 2025-03-23 13:46:18 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:46:18.126387 | orchestrator | 2025-03-23 13:46:18 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:46:18.127823 | orchestrator | 2025-03-23 13:46:18 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:46:18.128893 | orchestrator | 2025-03-23 13:46:18 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:46:21.196450 | orchestrator | 2025-03-23 13:46:18 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:46:21.196568 | orchestrator | 2025-03-23 13:46:21 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:46:21.197414 | orchestrator | 2025-03-23 13:46:21 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:46:21.198598 | orchestrator | 2025-03-23 13:46:21 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:46:21.199508 | orchestrator | 2025-03-23 13:46:21 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:46:21.200476 | orchestrator | 2025-03-23 13:46:21 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:46:24.251235 | orchestrator | 2025-03-23 13:46:21 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:46:24.251375 | orchestrator | 2025-03-23 13:46:24 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:46:24.251669 | orchestrator | 2025-03-23 13:46:24 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:46:24.251706 | orchestrator | 2025-03-23 13:46:24 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:46:24.253491 | orchestrator | 2025-03-23 13:46:24 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:46:24.258753 | orchestrator | 2025-03-23 13:46:24 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 
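The repeating "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines are the OSISM manager polling the background tasks it spawned for the individual services until each one finishes. Expressed as a generic Ansible wait loop, the pattern looks like this (the osism status command and the task_id variable are placeholders chosen for the sketch, not the actual interface):

- name: Wait for a manager task to leave the STARTED state
  ansible.builtin.command: "osism get task {{ task_id }}"   # placeholder status query
  register: task_state
  changed_when: false
  until: "'STARTED' not in task_state.stdout"
  retries: 600          # give up after roughly ten minutes
  delay: 1              # matches the one-second interval seen in the log
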
2025-03-23 13:46:27.332316 | orchestrator | 2025-03-23 13:46:24 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:46:27.332429 | orchestrator | 2025-03-23 13:46:27 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:46:27.337346 | orchestrator | 2025-03-23 13:46:27 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:46:27.339235 | orchestrator | 2025-03-23 13:46:27 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:46:27.341549 | orchestrator | 2025-03-23 13:46:27 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:46:27.344666 | orchestrator | 2025-03-23 13:46:27 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:46:30.379203 | orchestrator | 2025-03-23 13:46:27 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:46:30.379325 | orchestrator | 2025-03-23 13:46:30 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:46:30.380076 | orchestrator | 2025-03-23 13:46:30 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:46:30.383756 | orchestrator | 2025-03-23 13:46:30 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:46:30.384646 | orchestrator | 2025-03-23 13:46:30 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:46:30.385521 | orchestrator | 2025-03-23 13:46:30 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:46:33.411332 | orchestrator | 2025-03-23 13:46:30 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:46:33.411452 | orchestrator | 2025-03-23 13:46:33 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:46:33.412022 | orchestrator | 2025-03-23 13:46:33 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:46:33.413477 | orchestrator | 2025-03-23 13:46:33 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:46:33.414128 | orchestrator | 2025-03-23 13:46:33 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:46:33.415158 | orchestrator | 2025-03-23 13:46:33 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:46:33.417042 | orchestrator | 2025-03-23 13:46:33 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:46:36.460370 | orchestrator | 2025-03-23 13:46:36 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:46:36.461788 | orchestrator | 2025-03-23 13:46:36 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:46:36.462462 | orchestrator | 2025-03-23 13:46:36 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state STARTED 2025-03-23 13:46:36.463182 | orchestrator | 2025-03-23 13:46:36 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:46:36.464736 | orchestrator | 2025-03-23 13:46:36 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:46:36.465044 | orchestrator | 2025-03-23 13:46:36 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:46:39.493474 | orchestrator | 2025-03-23 13:46:39 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:46:39.494515 | orchestrator | 2025-03-23 13:46:39 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 
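All of the healthchecks configured above run inside the containers ("healthcheck_curl http://…:9511" for the API, "healthcheck_port magnum-conductor 5672" for the conductor). A quick way to confirm the same thing from the orchestrator once the handlers have restarted everything is an external smoke test against the internal API addresses printed in the log; this task is an optional add-on, not part of the playbooks above:

- name: Check that the Magnum API answers on each internal address
  ansible.builtin.uri:
    url: "http://{{ item }}:9511/"
    status_code: [200, 300]      # the bare version document may answer with 300 Multiple Choices
  register: magnum_api_check
  until: magnum_api_check is succeeded
  retries: 10
  delay: 5
  loop:
    - 192.168.16.10
    - 192.168.16.11
    - 192.168.16.12
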
2025-03-23 13:46:39.496277 | orchestrator | 2025-03-23 13:46:39 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:46:39.509724 | orchestrator | 2025-03-23 13:46:39.509766 | orchestrator | 2025-03-23 13:46:39.509782 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 13:46:39.509798 | orchestrator | 2025-03-23 13:46:39.509813 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 13:46:39.509827 | orchestrator | Sunday 23 March 2025 13:40:15 +0000 (0:00:00.461) 0:00:00.462 ********** 2025-03-23 13:46:39.509842 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:46:39.509859 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:46:39.509898 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:46:39.509914 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:46:39.509928 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:46:39.509943 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:46:39.509958 | orchestrator | 2025-03-23 13:46:39.509973 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-23 13:46:39.509987 | orchestrator | Sunday 23 March 2025 13:40:16 +0000 (0:00:01.040) 0:00:01.502 ********** 2025-03-23 13:46:39.510002 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-03-23 13:46:39.510061 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-03-23 13:46:39.510080 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-03-23 13:46:39.510095 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-03-23 13:46:39.510110 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-03-23 13:46:39.510125 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-03-23 13:46:39.510139 | orchestrator | 2025-03-23 13:46:39.510154 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-03-23 13:46:39.510169 | orchestrator | 2025-03-23 13:46:39.510184 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-03-23 13:46:39.510198 | orchestrator | Sunday 23 March 2025 13:40:17 +0000 (0:00:00.958) 0:00:02.460 ********** 2025-03-23 13:46:39.510214 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:46:39.510231 | orchestrator | 2025-03-23 13:46:39.510246 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-03-23 13:46:39.510261 | orchestrator | Sunday 23 March 2025 13:40:19 +0000 (0:00:01.556) 0:00:04.017 ********** 2025-03-23 13:46:39.510276 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:46:39.510291 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:46:39.510306 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:46:39.510321 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:46:39.510336 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:46:39.510351 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:46:39.510366 | orchestrator | 2025-03-23 13:46:39.510381 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-03-23 13:46:39.510395 | orchestrator | Sunday 23 March 2025 13:40:21 +0000 (0:00:01.841) 0:00:05.858 ********** 2025-03-23 13:46:39.510410 | orchestrator | ok: 
[testbed-node-1] 2025-03-23 13:46:39.510425 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:46:39.510858 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:46:39.510885 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:46:39.510913 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:46:39.510927 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:46:39.510941 | orchestrator | 2025-03-23 13:46:39.510955 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-03-23 13:46:39.510969 | orchestrator | Sunday 23 March 2025 13:40:22 +0000 (0:00:01.331) 0:00:07.190 ********** 2025-03-23 13:46:39.510983 | orchestrator | ok: [testbed-node-0] => { 2025-03-23 13:46:39.510997 | orchestrator |  "changed": false, 2025-03-23 13:46:39.511011 | orchestrator |  "msg": "All assertions passed" 2025-03-23 13:46:39.511025 | orchestrator | } 2025-03-23 13:46:39.511039 | orchestrator | ok: [testbed-node-1] => { 2025-03-23 13:46:39.511053 | orchestrator |  "changed": false, 2025-03-23 13:46:39.511067 | orchestrator |  "msg": "All assertions passed" 2025-03-23 13:46:39.511080 | orchestrator | } 2025-03-23 13:46:39.511094 | orchestrator | ok: [testbed-node-2] => { 2025-03-23 13:46:39.511108 | orchestrator |  "changed": false, 2025-03-23 13:46:39.511122 | orchestrator |  "msg": "All assertions passed" 2025-03-23 13:46:39.511136 | orchestrator | } 2025-03-23 13:46:39.511150 | orchestrator | ok: [testbed-node-3] => { 2025-03-23 13:46:39.511163 | orchestrator |  "changed": false, 2025-03-23 13:46:39.511177 | orchestrator |  "msg": "All assertions passed" 2025-03-23 13:46:39.511191 | orchestrator | } 2025-03-23 13:46:39.511205 | orchestrator | ok: [testbed-node-4] => { 2025-03-23 13:46:39.511230 | orchestrator |  "changed": false, 2025-03-23 13:46:39.511244 | orchestrator |  "msg": "All assertions passed" 2025-03-23 13:46:39.511257 | orchestrator | } 2025-03-23 13:46:39.511271 | orchestrator | ok: [testbed-node-5] => { 2025-03-23 13:46:39.511285 | orchestrator |  "changed": false, 2025-03-23 13:46:39.511298 | orchestrator |  "msg": "All assertions passed" 2025-03-23 13:46:39.511312 | orchestrator | } 2025-03-23 13:46:39.511326 | orchestrator | 2025-03-23 13:46:39.511340 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-03-23 13:46:39.511354 | orchestrator | Sunday 23 March 2025 13:40:23 +0000 (0:00:00.849) 0:00:08.040 ********** 2025-03-23 13:46:39.511368 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:46:39.511382 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:46:39.511395 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:46:39.511409 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.511423 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.511437 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.511450 | orchestrator | 2025-03-23 13:46:39.511465 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-03-23 13:46:39.511480 | orchestrator | Sunday 23 March 2025 13:40:24 +0000 (0:00:01.058) 0:00:09.098 ********** 2025-03-23 13:46:39.511496 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-03-23 13:46:39.511511 | orchestrator | 2025-03-23 13:46:39.511526 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-03-23 13:46:39.511542 | orchestrator | Sunday 23 March 2025 13:40:28 +0000 (0:00:03.826) 0:00:12.924 ********** 2025-03-23 
13:46:39.511558 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-03-23 13:46:39.511574 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-03-23 13:46:39.511590 | orchestrator | 2025-03-23 13:46:39.511636 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-03-23 13:46:39.511652 | orchestrator | Sunday 23 March 2025 13:40:35 +0000 (0:00:07.460) 0:00:20.385 ********** 2025-03-23 13:46:39.511668 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-03-23 13:46:39.511683 | orchestrator | 2025-03-23 13:46:39.511698 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-03-23 13:46:39.511714 | orchestrator | Sunday 23 March 2025 13:40:39 +0000 (0:00:04.009) 0:00:24.394 ********** 2025-03-23 13:46:39.511729 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-03-23 13:46:39.511744 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-03-23 13:46:39.511765 | orchestrator | 2025-03-23 13:46:39.512271 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-03-23 13:46:39.512290 | orchestrator | Sunday 23 March 2025 13:40:43 +0000 (0:00:04.297) 0:00:28.692 ********** 2025-03-23 13:46:39.512304 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-03-23 13:46:39.512318 | orchestrator | 2025-03-23 13:46:39.512331 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-03-23 13:46:39.512345 | orchestrator | Sunday 23 March 2025 13:40:47 +0000 (0:00:03.832) 0:00:32.525 ********** 2025-03-23 13:46:39.512359 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-03-23 13:46:39.512373 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-03-23 13:46:39.512387 | orchestrator | 2025-03-23 13:46:39.512401 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-03-23 13:46:39.512414 | orchestrator | Sunday 23 March 2025 13:40:57 +0000 (0:00:09.568) 0:00:42.093 ********** 2025-03-23 13:46:39.512428 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:46:39.512448 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:46:39.512462 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:46:39.512476 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.512490 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.512514 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.512528 | orchestrator | 2025-03-23 13:46:39.512541 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-03-23 13:46:39.512555 | orchestrator | Sunday 23 March 2025 13:40:58 +0000 (0:00:00.774) 0:00:42.867 ********** 2025-03-23 13:46:39.512569 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:46:39.512583 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:46:39.512616 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.512631 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.512645 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:46:39.512659 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.512672 | orchestrator | 2025-03-23 13:46:39.512686 | orchestrator | TASK [neutron : Check IPv6 support] 
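The service-ks-register block above registers Neutron in Keystone exactly as the log shows: a "network" service, internal and public endpoints on port 9696, and a neutron user in the service project holding the admin and service roles. A hedged equivalent with the openstack.cloud collection (the admin cloud entry, region name and password variable are assumptions):

- name: neutron | Creating services
  openstack.cloud.catalog_service:
    cloud: admin                      # assumed clouds.yaml entry holding admin credentials
    name: neutron
    service_type: network
    state: present

- name: neutron | Creating endpoints
  openstack.cloud.endpoint:
    cloud: admin
    service: neutron
    endpoint_interface: "{{ item.interface }}"
    url: "{{ item.url }}"
    region: RegionOne                 # assumed region name
    state: present
  loop:
    - { interface: internal, url: "https://api-int.testbed.osism.xyz:9696" }
    - { interface: public, url: "https://api.testbed.osism.xyz:9696" }

- name: neutron | Creating users
  openstack.cloud.identity_user:
    cloud: admin
    name: neutron
    password: "{{ neutron_keystone_password }}"   # assumed variable name
    default_project: service
    state: present

- name: neutron | Granting user roles
  openstack.cloud.role_assignment:
    cloud: admin
    user: neutron
    project: service
    role: "{{ item }}"
  loop:
    - admin
    - service
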
******************************************** 2025-03-23 13:46:39.512700 | orchestrator | Sunday 23 March 2025 13:41:02 +0000 (0:00:04.844) 0:00:47.712 ********** 2025-03-23 13:46:39.512714 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:46:39.512728 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:46:39.512742 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:46:39.512755 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:46:39.512838 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:46:39.512909 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:46:39.512926 | orchestrator | 2025-03-23 13:46:39.512941 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-03-23 13:46:39.513319 | orchestrator | Sunday 23 March 2025 13:41:04 +0000 (0:00:01.335) 0:00:49.047 ********** 2025-03-23 13:46:39.513338 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:46:39.513352 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:46:39.513366 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:46:39.513380 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.513394 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.513408 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.513422 | orchestrator | 2025-03-23 13:46:39.513436 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-03-23 13:46:39.513450 | orchestrator | Sunday 23 March 2025 13:41:10 +0000 (0:00:05.873) 0:00:54.920 ********** 2025-03-23 13:46:39.513468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.513517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.513535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.513562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.513578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.513625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.513644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.513761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.513781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.513806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.513853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.513870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.513885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.514691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.514740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.514764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.514780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.514814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.514831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.515435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.515474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.515508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.515525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.515541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.515557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.515754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.515782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.515799 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-23 13:46:39.515833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.515850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.515866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.515956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.515986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.516014 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-23 13:46:39.516028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:46:39.516044 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.516121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 
'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.516149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.518189 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.518276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.518299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.518363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.518403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.518438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:46:39.518454 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-23 13:46:39.518469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.518499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.518516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:46:39.518548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.518565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.518581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 
'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.518637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.518654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.518688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.518704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.518720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 
'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.518748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.518764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.518779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.518801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.518823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.518851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.518866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.518880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.518895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.518916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.518939 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.518955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.518980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.518996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.519018 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.519040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.519055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.519070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.519084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.519110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 
'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.519132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.519147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.519162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.519184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.519200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 
13:46:39.519231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.519246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.519270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.519286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.519308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.519336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.519351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.519372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.519387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.519409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.519434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.519450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.519464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.519479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.519500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.519525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.519547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.519563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.519578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.519620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.519637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.519651 | orchestrator | 2025-03-23 13:46:39.519667 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-03-23 13:46:39.519681 | orchestrator | Sunday 23 March 2025 13:41:16 +0000 (0:00:05.856) 0:01:00.777 ********** 2025-03-23 13:46:39.519695 | orchestrator | [WARNING]: Skipped 2025-03-23 13:46:39.519710 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-03-23 13:46:39.519724 | orchestrator | due to this access issue: 2025-03-23 13:46:39.519745 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-03-23 13:46:39.519759 | orchestrator | a directory 2025-03-23 13:46:39.519773 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-23 13:46:39.519787 | orchestrator | 2025-03-23 13:46:39.519807 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-03-23 13:46:39.519821 | orchestrator | Sunday 23 March 2025 13:41:16 +0000 (0:00:00.912) 0:01:01.689 ********** 2025-03-23 13:46:39.519836 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:46:39.519850 | orchestrator | 2025-03-23 13:46:39.519864 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-03-23 13:46:39.519878 | orchestrator | Sunday 23 March 2025 13:41:18 +0000 (0:00:01.510) 0:01:03.200 ********** 2025-03-23 13:46:39.519892 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-23 13:46:39.519924 | 
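Each service definition above also carries a healthcheck dict (interval, retries, start_period, test, timeout). A rough Python sketch of how such a dict could map onto Docker healthcheck options, assuming the values are seconds; the actual translation is done inside kolla-ansible's container module, not here:

def healthcheck_flags(hc):
    """Render a kolla-style healthcheck dict as docker-run style flags (illustrative)."""
    cmd = " ".join(hc["test"][1:]) if hc["test"] and hc["test"][0] == "CMD-SHELL" else " ".join(hc["test"])
    return [
        "--health-cmd", cmd,
        "--health-interval", hc["interval"] + "s",       # assuming seconds
        "--health-retries", str(hc["retries"]),
        "--health-start-period", hc["start_period"] + "s",
        "--health-timeout", hc["timeout"] + "s",
    ]

hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_port neutron-ovn-metadata-agent 6640"],
      "timeout": "30"}
print(healthcheck_flags(hc))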
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:46:39.519940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:46:39.519955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:46:39.519976 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}}) 2025-03-23 13:46:39.519990 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-23 13:46:39.520011 | orchestrator | 2025-03-23 13:46:39.520025 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-03-23 13:46:39.520040 | orchestrator | Sunday 23 March 2025 13:41:22 +0000 (0:00:04.491) 0:01:07.691 ********** 2025-03-23 13:46:39.520064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.520079 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:46:39.520094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.520109 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:46:39.520129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.520144 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:46:39.520158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.520184 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.520209 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.520225 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.520239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.520253 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.520267 | orchestrator | 2025-03-23 13:46:39.520281 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-03-23 13:46:39.520294 | orchestrator | Sunday 23 March 2025 13:41:27 +0000 (0:00:05.040) 0:01:12.732 ********** 2025-03-23 13:46:39.520309 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.520323 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:46:39.520345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.520370 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:46:39.520395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.520411 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:46:39.520425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.520439 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.520454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.520468 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.520496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.520510 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.520524 | orchestrator | 2025-03-23 13:46:39.520538 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-03-23 13:46:39.520553 | orchestrator | Sunday 23 March 2025 13:41:36 +0000 (0:00:08.121) 0:01:20.859 ********** 2025-03-23 13:46:39.520575 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:46:39.520612 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:46:39.520628 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.520642 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:46:39.520656 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.520670 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.520684 | orchestrator | 2025-03-23 13:46:39.520698 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-03-23 13:46:39.520712 | orchestrator | Sunday 23 March 2025 13:41:43 +0000 (0:00:07.282) 0:01:28.142 ********** 2025-03-23 13:46:39.520726 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:46:39.520740 | orchestrator | 2025-03-23 13:46:39.520754 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-03-23 13:46:39.520768 | orchestrator | Sunday 23 March 2025 13:41:43 +0000 (0:00:00.347) 0:01:28.490 ********** 2025-03-23 13:46:39.520782 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:46:39.520796 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:46:39.520809 | orchestrator 
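The backend TLS certificate, TLS key, and "Creating TLS backend PEM File" tasks above all skip, consistent with backend TLS not being enabled in this deployment (neutron-tls-proxy has enabled 'no', and the neutron_server haproxy entries carry no tls_backend flag). For orientation only: a backend PEM of the kind the skipped task would produce is conventionally the certificate followed by the private key in a single file. The sketch below illustrates that convention with hypothetical file names and is not taken from the kolla-ansible role:

from pathlib import Path

def build_backend_pem(cert_path, key_path, pem_path):
    """Concatenate certificate and key into one HAProxy-style PEM (illustrative)."""
    pem = Path(cert_path).read_text() + Path(key_path).read_text()
    Path(pem_path).write_text(pem)

# Hypothetical names, for illustration only:
# build_backend_pem("neutron-cert.pem", "neutron-key.pem", "neutron-cert-and-key.pem")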
| skipping: [testbed-node-2] 2025-03-23 13:46:39.520823 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.520837 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.520851 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.520864 | orchestrator | 2025-03-23 13:46:39.520883 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-03-23 13:46:39.520898 | orchestrator | Sunday 23 March 2025 13:41:45 +0000 (0:00:02.062) 0:01:30.552 ********** 2025-03-23 13:46:39.520912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.520939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.520955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.520983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.520999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.521013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.521040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.521056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.521070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 
5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.521091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.521113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.521128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.521142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.521156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.521181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.521204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.521226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.521241 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:46:39.521256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.521270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.521285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.521316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.521338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.521354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 
5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.521368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.521383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.521397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.521428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.521444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.521465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.521479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.521494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.521517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.521545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.521559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.521574 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:46:39.521613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.521629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.521655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.521679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.521694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.521714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.521730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.521744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.521759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 
13:46:39.521790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.521806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.521826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.521841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.521855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.521880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.521903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.521918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.521932 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:46:39.521953 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.521969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.521993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.522050 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.522068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.522083 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.522110 
| orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.522126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.522151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.522174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.522189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.522203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.522223 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.522238 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.522263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.522285 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.522300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.522314 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.522329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.522350 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.522376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.522403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.522417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.522432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.522447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.522467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.522482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 
13:46:39.522515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.522531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.522546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.522560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.522581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.522614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.522648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.522664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.522680 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.522928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.522948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.522972 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.522987 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.523001 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.523016 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523030 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.523044 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.523066 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.523105 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.523134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.523149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.523194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.523208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523223 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.523236 | orchestrator | 2025-03-23 13:46:39.523251 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-03-23 13:46:39.523265 | orchestrator | Sunday 23 March 2025 13:41:53 +0000 (0:00:07.677) 0:01:38.229 ********** 2025-03-23 13:46:39.523279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.523294 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523351 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.523366 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.523394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.523415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:46:39.523451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:46:39.523500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.523567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523582 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.523671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.523685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.523714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.523729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.523743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.523801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.523831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.523846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.523874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 
'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.523904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.523920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.523935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.523955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  
2025-03-23 13:46:39.523976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.523991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.524020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.524035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:46:39.524076 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.524090 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524105 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524120 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.524161 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524190 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.524205 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.524220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.524291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.524327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.524341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.524362 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-23 13:46:39.524377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524391 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.524426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.524490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.524504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.524563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.524579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.524610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.524626 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.524652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524687 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.524723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.524738 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': 
{'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.524776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.524797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524812 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-23 13:46:39.524826 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524847 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.524862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.524876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.524913 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.524927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524948 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-23 13:46:39.524963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.524977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.524998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.525013 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.525028 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.525050 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.525065 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.525079 | orchestrator | 2025-03-23 13:46:39.525093 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-03-23 13:46:39.525107 | orchestrator | Sunday 23 March 2025 13:42:01 +0000 (0:00:07.535) 0:01:45.765 ********** 2025-03-23 13:46:39.525128 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.525143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.525157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.525178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.525193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.525207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.525228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.525243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.525258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.525279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.525294 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.525309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.525329 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.525344 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.525365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.525379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.525394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.525408 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.525428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.526012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526092 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.526118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526128 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.526135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.526146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:46:39.526162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.526198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.526213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.526220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.526243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.526258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.526266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.526287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.526298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:46:39.526314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526333 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:46:39.526345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.526395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.526421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.526435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.526442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.526449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.526463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.526485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.526506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.526521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': 
{'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.526528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.526544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.526557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  
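
(Aside, not part of the captured console output.) The long run of skipping/changed lines above comes from the "neutron : Copying over neutron.conf" task iterating over the Neutron role's service-definition dictionary: a config file is only templated on a host where that service actually runs, so most items report skipping and only neutron-server and neutron-ovn-metadata-agent report changed on the hosts that carry them. A minimal Python sketch of that per-item decision, under the assumption that it reduces to the 'enabled' and 'host_in_groups' flags visible in each item (an illustrative simplification, not the actual kolla-ansible implementation):

def should_template_config(service: dict) -> bool:
    # True  -> the log above would show 'changed' (or 'ok') for this item
    # False -> the log above would show 'skipping'
    enabled = str(service.get("enabled", False)).lower() in ("true", "yes", "1")
    return enabled and bool(service.get("host_in_groups", False))

# Items trimmed from the loop output above:
neutron_server = {"container_name": "neutron_server", "enabled": True, "host_in_groups": True}
neutron_tls_proxy = {"container_name": "neutron_tls_proxy", "enabled": "no", "host_in_groups": True}

print(should_template_config(neutron_server))     # True  -> changed
print(should_template_config(neutron_tls_proxy))  # False -> skipping
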
2025-03-23 13:46:39.526565 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-23 13:46:39.526573 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.526587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.526612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 
'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.526637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.526645 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.526661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.526669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.526694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.526702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526710 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-23 13:46:39.526718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.526739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.526751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.526768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.526776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526787 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-23 13:46:39.526798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.526814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.526822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.526845 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.526856 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-03-23 13:46:39.526864 | orchestrator |
2025-03-23 13:46:39.526872 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2025-03-23 13:46:39.526880 | orchestrator | Sunday 23 March 2025 13:42:12 +0000 (0:00:11.673) 0:01:57.438 **********
2025-03-23 13:46:39.526888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-03-23 13:46:39.526911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-03-23 13:46:39.526920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-03-23 13:46:39.526932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False,
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526953 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.526985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.526997 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.527072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}}})  2025-03-23 13:46:39.527105 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.527119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.527129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527142 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.527150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.527169 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.527181 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.527188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527207 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.527215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.527223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.527246 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.527255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.527273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.527280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.527304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:46:39.527311 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.527322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527355 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.527363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:46:39.527386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.527401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.527444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.527452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.527460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.527477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.527492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.527508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.527527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.527534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527556 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.527583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.527590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.527619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.527629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.527641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527662 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.527682 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.527694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.527706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.527714 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.527722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.527730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:46:39.527738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527767 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527783 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.527798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527828 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.527835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.527843 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.527851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527863 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.527888 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.527896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.527903 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 
13:46:39.527911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.527947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527954 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.527963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.527971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.527978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.527991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.528005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.528014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528022 | orchestrator | 2025-03-23 13:46:39.528029 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-03-23 13:46:39.528037 | orchestrator | Sunday 23 March 2025 13:42:17 +0000 (0:00:05.149) 0:02:02.588 ********** 2025-03-23 13:46:39.528045 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.528052 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.528059 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:46:39.528066 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.528073 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:46:39.528080 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:46:39.528087 | orchestrator | 2025-03-23 13:46:39.528094 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-03-23 13:46:39.528101 | orchestrator | Sunday 23 March 2025 13:42:25 +0000 (0:00:07.594) 0:02:10.183 ********** 2025-03-23 13:46:39.528108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.528115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.528217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.528237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.528245 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.528272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.528286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.528293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': 
False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528304 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.528320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.528328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528335 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.528342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.528350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 
13:46:39.528394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.528412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.528419 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528434 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.528443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.528457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.528464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.528490 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.528498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528505 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.528512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.528534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528558 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.528577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.528637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.528647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.528672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.528687 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.528694 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': 
False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.528718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.528729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528737 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.528744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:46:39.528752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.528794 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.528817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.528824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.528846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.528866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.528873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.528893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.528901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:46:39.528926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.528964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.528977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.528988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.528997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.529005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.529018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.529026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.529037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.529045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.529057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.529065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.529073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.529089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:46:39.529098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.529109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.529117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.529125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.529132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.529149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.529160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.529167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.529174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.529181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.529193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.529200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.529210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.529279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.529290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.529297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.529304 | orchestrator | 2025-03-23 13:46:39.529310 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-03-23 13:46:39.529317 | orchestrator | Sunday 23 March 2025 13:42:32 +0000 (0:00:06.897) 0:02:17.081 ********** 2025-03-23 13:46:39.529324 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:46:39.529330 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:46:39.529337 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:46:39.529343 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.529350 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.529356 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.529363 | orchestrator | 2025-03-23 13:46:39.529369 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-03-23 13:46:39.529376 | orchestrator | Sunday 23 March 2025 13:42:35 +0000 (0:00:03.370) 0:02:20.452 ********** 2025-03-23 13:46:39.529383 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:46:39.529390 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:46:39.529403 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:46:39.529410 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.529420 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.529427 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.529434 | orchestrator | 2025-03-23 13:46:39.529440 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-03-23 13:46:39.529447 | orchestrator | Sunday 23 March 2025 13:42:38 +0000 (0:00:02.647) 0:02:23.099 ********** 2025-03-23 13:46:39.529453 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:46:39.529460 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:46:39.529467 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:46:39.529473 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.529480 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.529486 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.529493 | orchestrator | 2025-03-23 13:46:39.529535 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-03-23 13:46:39.529544 | orchestrator | Sunday 23 March 2025 13:42:43 +0000 (0:00:05.291) 0:02:28.390 ********** 2025-03-23 13:46:39.529551 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:46:39.529557 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:46:39.529563 | orchestrator | skipping: [testbed-node-2] 
2025-03-23 13:46:39.529570 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.529576 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.529583 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.529589 | orchestrator | 2025-03-23 13:46:39.529608 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-03-23 13:46:39.529615 | orchestrator | Sunday 23 March 2025 13:42:47 +0000 (0:00:03.662) 0:02:32.053 ********** 2025-03-23 13:46:39.529622 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:46:39.529628 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:46:39.529635 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:46:39.529641 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.529648 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.529654 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.529661 | orchestrator | 2025-03-23 13:46:39.529667 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-03-23 13:46:39.529674 | orchestrator | Sunday 23 March 2025 13:42:50 +0000 (0:00:03.435) 0:02:35.489 ********** 2025-03-23 13:46:39.529680 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:46:39.529687 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:46:39.529693 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.529700 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:46:39.529706 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.529713 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.529719 | orchestrator | 2025-03-23 13:46:39.529726 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-03-23 13:46:39.529733 | orchestrator | Sunday 23 March 2025 13:42:54 +0000 (0:00:03.659) 0:02:39.149 ********** 2025-03-23 13:46:39.529739 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-03-23 13:46:39.529746 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:46:39.529753 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-03-23 13:46:39.529759 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:46:39.529768 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-03-23 13:46:39.529775 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:46:39.529782 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-03-23 13:46:39.529788 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.529795 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-03-23 13:46:39.529802 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.529808 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-03-23 13:46:39.529818 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.529825 | orchestrator | 2025-03-23 13:46:39.529831 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-03-23 13:46:39.529838 | orchestrator | Sunday 23 March 2025 13:42:57 +0000 (0:00:03.171) 0:02:42.320 ********** 2025-03-23 13:46:39.529866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.529874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.529919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.529929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.529936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.529953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.529961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.529968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.530008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.530035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.530062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.530070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.530151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.530198 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.530208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.530240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.530287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.530296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.530309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.530385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530394 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:46:39.530401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.530413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.530420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.530426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.530486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.530499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.530506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.530518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.530573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  
2025-03-23 13:46:39.530582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530593 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:46:39.530619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.530626 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.530640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.530707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.530727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.530734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.530787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.530801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.530808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530821 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:46:39.530835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.530876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.530895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.530909 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530916 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.530923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.530961 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.530975 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.530981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.530994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531001 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.531057 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.531078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.531093 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.531107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531150 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.531159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.531172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531179 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.531186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.531192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531202 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.531256 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.531268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.531333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.531357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 
'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.531365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.531379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531389 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.531436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.531446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.531466 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.531472 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531482 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.531488 | orchestrator | 2025-03-23 13:46:39.531495 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] 
********************************* 2025-03-23 13:46:39.531501 | orchestrator | Sunday 23 March 2025 13:43:03 +0000 (0:00:05.852) 0:02:48.173 ********** 2025-03-23 13:46:39.531540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.531555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531575 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.531605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.531662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.531670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.531683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.531732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.531741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.531762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.531772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.531823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.531833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531859 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:46:39.531865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531911 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.531920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.531952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.531958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.532003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.532027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.532034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532044 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.532090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.532099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.532106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.532132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.532138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.532145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532207 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:46:39.532214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.532220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.532237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.532244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.532300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.532311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532317 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:46:39.532324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.532330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.532411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.532425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.532471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.532488 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.532498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.532517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.532567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.532699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.532710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 
13:46:39.532721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.532728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.532735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532741 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.532748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532754 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.532800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.532813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.532827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.532833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532844 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.532883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.532896 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532903 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.532909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.532915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.532977 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.532983 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.532990 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.533002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.533009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.533033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.533041 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.533047 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.533054 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.533074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.533082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.533106 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-03-23 13:46:39.533114 | orchestrator | skipping: [testbed-node-5]
2025-03-23 13:46:39.533120 | orchestrator |
2025-03-23 13:46:39.533126 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-03-23 13:46:39.533132 | orchestrator | Sunday 23 March 2025 13:43:06 +0000 (0:00:02.996) 0:02:51.169 **********
2025-03-23 13:46:39.533138 | orchestrator | skipping: [testbed-node-1]
2025-03-23 13:46:39.533144 | orchestrator | skipping: [testbed-node-0]
2025-03-23 13:46:39.533149 | orchestrator | skipping: [testbed-node-2]
2025-03-23 13:46:39.533155 | orchestrator | skipping: [testbed-node-4]
2025-03-23 13:46:39.533164 | orchestrator | skipping: [testbed-node-3]
2025-03-23 13:46:39.533170 | orchestrator | skipping: [testbed-node-5]
2025-03-23 13:46:39.533176 | orchestrator |
2025-03-23 13:46:39.533184 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-03-23 13:46:39.533190 | orchestrator | Sunday 23 March 2025 13:43:08 +0000 (0:00:02.279) 0:02:53.449 **********
2025-03-23 13:46:39.533196 | orchestrator | skipping: [testbed-node-0]
2025-03-23 13:46:39.533202 | orchestrator | skipping: [testbed-node-2]
2025-03-23 13:46:39.533210 | orchestrator | skipping: [testbed-node-1]
2025-03-23 13:46:39.533216 | orchestrator | changed: [testbed-node-3]
2025-03-23 13:46:39.533229 | orchestrator | changed: [testbed-node-5]
2025-03-23 13:46:39.533235 | orchestrator | changed: [testbed-node-4]
2025-03-23 13:46:39.533241 | orchestrator |
2025-03-23 13:46:39.533247 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************
2025-03-23 13:46:39.533253 | orchestrator | Sunday 23 March 2025 13:43:19 +0000 (0:00:11.118) 0:03:04.567 **********
2025-03-23 13:46:39.533259 | orchestrator | skipping: [testbed-node-2]
2025-03-23 13:46:39.533264 | orchestrator | skipping: [testbed-node-1]
2025-03-23 13:46:39.533270 | orchestrator | skipping: [testbed-node-3]
2025-03-23 13:46:39.533276 | orchestrator | skipping: [testbed-node-4]
2025-03-23 13:46:39.533282 | orchestrator | skipping: [testbed-node-0]
2025-03-23 13:46:39.533288 | orchestrator | skipping: [testbed-node-5]
2025-03-23 13:46:39.533294 | orchestrator |
2025-03-23 13:46:39.533300 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-03-23 13:46:39.533306 | orchestrator | Sunday 23 March 2025 13:43:23 +0000 (0:00:03.203) 0:03:07.770 **********
2025-03-23 13:46:39.533311 | orchestrator | skipping: [testbed-node-0]
2025-03-23 13:46:39.533317 | orchestrator | skipping: [testbed-node-1]
2025-03-23 13:46:39.533323 | orchestrator | skipping: [testbed-node-5]
2025-03-23 13:46:39.533329 | orchestrator | skipping: [testbed-node-2]
2025-03-23 13:46:39.533335 | orchestrator | skipping: [testbed-node-3]
2025-03-23 13:46:39.533341 | orchestrator | skipping: [testbed-node-4]
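The long runs of "skipping" above come from the per-service loop items dumped earlier: each config-copy task in the neutron role iterates over the service map and only acts where the service is enabled and the current host sits in the service's group. The sketch below is an illustrative approximation of that condition (not the actual kolla-ansible role code), using field names taken from the items printed in this log:

```python
# Illustrative approximation only -- not the kolla-ansible role code.
# It mirrors the condition behind the "skipping"/"changed" results above:
# a per-service config task acts only when the service is enabled and the
# current host is in the service's group ('host_in_groups' in these items).
def should_copy_config(service: dict) -> bool:
    # 'enabled' appears both as a bool and as the string 'no' in the dumped items
    enabled = str(service.get("enabled", False)).lower() in ("true", "yes")
    return enabled and bool(service.get("host_in_groups", False))

# Values shaped like the loop items in this log (abridged):
services = {
    "neutron_ovn_metadata_agent": {"enabled": True, "host_in_groups": True},  # -> "changed" on nodes 3-5
    "neutron_dhcp_agent": {"enabled": False, "host_in_groups": False},        # -> "skipping"
    "neutron_tls_proxy": {"enabled": "no", "host_in_groups": True},           # -> "skipping"
}

for name, svc in services.items():
    print(f"{name}: {'run' if should_copy_config(svc) else 'skip'}")
```

With OVN as the ML2 backend in this deployment, only the OVN metadata agent is enabled on the compute nodes, which is why every other agent's config task skips.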
2025-03-23 13:46:39.533346 | orchestrator |
2025-03-23 13:46:39.533352 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-03-23 13:46:39.533358 | orchestrator | Sunday 23 March 2025 13:43:27 +0000 (0:00:04.306) 0:03:12.077 **********
2025-03-23 13:46:39.533364 | orchestrator | skipping: [testbed-node-0]
2025-03-23 13:46:39.533370 | orchestrator | skipping: [testbed-node-1]
2025-03-23 13:46:39.533376 | orchestrator | skipping: [testbed-node-2]
2025-03-23 13:46:39.533382 | orchestrator | skipping: [testbed-node-3]
2025-03-23 13:46:39.533388 | orchestrator | skipping: [testbed-node-5]
2025-03-23 13:46:39.533398 | orchestrator | skipping: [testbed-node-4]
2025-03-23 13:46:39.533403 | orchestrator |
2025-03-23 13:46:39.533409 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-03-23 13:46:39.533415 | orchestrator | Sunday 23 March 2025 13:43:30 +0000 (0:00:03.248) 0:03:15.325 **********
2025-03-23 13:46:39.533421 | orchestrator | skipping: [testbed-node-0]
2025-03-23 13:46:39.533427 | orchestrator | skipping: [testbed-node-4]
2025-03-23 13:46:39.533433 | orchestrator | skipping: [testbed-node-2]
2025-03-23 13:46:39.533439 | orchestrator | skipping: [testbed-node-1]
2025-03-23 13:46:39.533444 | orchestrator | skipping: [testbed-node-3]
2025-03-23 13:46:39.533450 | orchestrator | skipping: [testbed-node-5]
2025-03-23 13:46:39.533456 | orchestrator |
2025-03-23 13:46:39.533462 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-03-23 13:46:39.533468 | orchestrator | Sunday 23 March 2025 13:43:34 +0000 (0:00:03.536) 0:03:18.862 **********
2025-03-23 13:46:39.533473 | orchestrator | skipping: [testbed-node-0]
2025-03-23 13:46:39.533479 | orchestrator | skipping: [testbed-node-3]
2025-03-23 13:46:39.533485 | orchestrator | skipping: [testbed-node-1]
2025-03-23 13:46:39.533491 | orchestrator | skipping: [testbed-node-2]
2025-03-23 13:46:39.533497 | orchestrator | skipping: [testbed-node-4]
2025-03-23 13:46:39.533503 | orchestrator | skipping: [testbed-node-5]
2025-03-23 13:46:39.533508 | orchestrator |
2025-03-23 13:46:39.533514 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-03-23 13:46:39.533520 | orchestrator | Sunday 23 March 2025 13:43:40 +0000 (0:00:05.918) 0:03:24.780 **********
2025-03-23 13:46:39.533527 | orchestrator | skipping: [testbed-node-3]
2025-03-23 13:46:39.533533 | orchestrator | skipping: [testbed-node-1]
2025-03-23 13:46:39.533540 | orchestrator | skipping: [testbed-node-2]
2025-03-23 13:46:39.533546 | orchestrator | skipping: [testbed-node-0]
2025-03-23 13:46:39.533553 | orchestrator | skipping: [testbed-node-4]
2025-03-23 13:46:39.533559 | orchestrator | skipping: [testbed-node-5]
2025-03-23 13:46:39.533565 | orchestrator |
2025-03-23 13:46:39.533572 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-03-23 13:46:39.533579 | orchestrator | Sunday 23 March 2025 13:43:52 +0000 (0:00:12.805) 0:03:37.586 **********
2025-03-23 13:46:39.533585 | orchestrator | skipping: [testbed-node-0]
2025-03-23 13:46:39.533592 | orchestrator | skipping: [testbed-node-2]
2025-03-23 13:46:39.533612 | orchestrator | skipping: [testbed-node-3]
2025-03-23 13:46:39.533618 | orchestrator | skipping: [testbed-node-4]
2025-03-23 13:46:39.533625 | orchestrator | skipping: [testbed-node-1]
2025-03-23 13:46:39.533646 | orchestrator | skipping: [testbed-node-5]
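Each loop item printed in this play also carries a 'healthcheck' block (interval, retries, start_period, test, timeout). A minimal sketch of how such a block could be rendered into Docker healthcheck flags follows; this is an assumed mapping for illustration, not kolla-ansible's container module itself:

```python
# Assumed mapping for illustration only -- the real wiring happens inside
# kolla-ansible's container module. The 'healthcheck' block carried by each
# item in this log resembles Docker's native healthcheck options, with
# durations given in seconds.
def healthcheck_to_docker_args(hc: dict) -> list:
    test = hc.get("test", [])
    # ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'] -> shell command string
    cmd = test[1] if len(test) == 2 and test[0] == "CMD-SHELL" else " ".join(test)
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

example = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port neutron-ovn-metadata-agent 6640"],
    "timeout": "30",
}
print(" ".join(healthcheck_to_docker_args(example)))
```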
2025-03-23 13:46:39.533653 | orchestrator |
2025-03-23 13:46:39.533660 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-03-23 13:46:39.533666 | orchestrator | Sunday 23 March 2025 13:43:57 +0000 (0:00:05.154) 0:03:42.490 **********
2025-03-23 13:46:39.533673 | orchestrator | skipping: [testbed-node-1]
2025-03-23 13:46:39.533682 | orchestrator | skipping: [testbed-node-0]
2025-03-23 13:46:39.533689 | orchestrator | skipping: [testbed-node-4]
2025-03-23 13:46:39.533696 | orchestrator | skipping: [testbed-node-2]
2025-03-23 13:46:39.533702 | orchestrator | skipping: [testbed-node-3]
2025-03-23 13:46:39.533709 | orchestrator | skipping: [testbed-node-5]
2025-03-23 13:46:39.533715 | orchestrator |
2025-03-23 13:46:39.533722 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-03-23 13:46:39.533728 | orchestrator | Sunday 23 March 2025 13:44:02 +0000 (0:00:05.154) 0:03:47.644 **********
2025-03-23 13:46:39.533735 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-03-23 13:46:39.533741 | orchestrator | skipping: [testbed-node-0]
2025-03-23 13:46:39.533748 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-03-23 13:46:39.533755 | orchestrator | skipping: [testbed-node-4]
2025-03-23 13:46:39.533761 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-03-23 13:46:39.533772 | orchestrator | skipping: [testbed-node-2]
2025-03-23 13:46:39.533778 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-03-23 13:46:39.533785 | orchestrator | skipping: [testbed-node-1]
2025-03-23 13:46:39.533791 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-03-23 13:46:39.533798 | orchestrator | skipping: [testbed-node-3]
2025-03-23 13:46:39.533804 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-03-23 13:46:39.533811 | orchestrator | skipping: [testbed-node-5]
2025-03-23 13:46:39.533817 | orchestrator |
2025-03-23 13:46:39.533824 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2025-03-23 13:46:39.533830 | orchestrator | Sunday 23 March 2025 13:44:13 +0000 (0:00:10.276) 0:03:57.920 **********
2025-03-23 13:46:39.533837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-03-23 13:46:39.533844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.533851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.533877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.533889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.533895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.533901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.533908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.533914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.533940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.533952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.533958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.533964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.533970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.533977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.534001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.534012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534036 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:46:39.534043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.534049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.534101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.534113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.534120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.534154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.534167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.534174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.534192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.534215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534223 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:46:39.534229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.534235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.534252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.534327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.534337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.534356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.534362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.534368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.534395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.534415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.534421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': 
{'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.534455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.534469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.534475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2025-03-23 13:46:39.534487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.534503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.534523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.534531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.534585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534611 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.534617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.534633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534639 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:46:39.534660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.534668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.534674 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534680 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.534695 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.534701 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.534721 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.534740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.534755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534762 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.534768 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.534788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.534795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534807 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534818 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.534829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.534856 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.534862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.534884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.534897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.534917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.534936 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.534946 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.534952 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.534958 | orchestrator | 2025-03-23 13:46:39.534964 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-03-23 13:46:39.534970 | orchestrator | Sunday 23 March 2025 13:44:23 +0000 (0:00:10.412) 0:04:08.333 ********** 2025-03-23 13:46:39.534976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:46:39.534997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.535032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.535058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.535066 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.535072 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535094 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.535128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:46:39.535144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.535156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.535162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.535187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.535213 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.535240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.535253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.535261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.535289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.535303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.535310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.535319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.535346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.535353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.535367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.535383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.535402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.535408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.535426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.535433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 13:46:39.535450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535475 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-23 13:46:39.535481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.535492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-23 13:46:39.535500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.535522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.535546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.535558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.535564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 13:46:39.535588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535631 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.535660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.535666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.535672 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.535678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.535697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.535716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.535721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.535748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.535754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535760 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-23 13:46:39.535772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.535789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.535797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.535815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.535820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535826 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-23 13:46:39.535835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:46:39.535849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:46:39.535858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535864 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 13:46:39.535870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 13:46:39.535879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 13:46:39.535884 | orchestrator | 2025-03-23 13:46:39.535890 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-03-23 13:46:39.535895 | orchestrator | Sunday 23 March 2025 13:44:31 +0000 (0:00:08.308) 0:04:16.642 ********** 2025-03-23 13:46:39.535901 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:46:39.535906 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:46:39.535913 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:46:39.535919 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:46:39.535924 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:46:39.535929 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:46:39.535934 | orchestrator | 2025-03-23 13:46:39.535940 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-03-23 13:46:39.535945 | orchestrator | Sunday 23 March 2025 13:44:34 +0000 (0:00:02.423) 0:04:19.065 ********** 2025-03-23 13:46:39.535950 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:46:39.535956 | orchestrator | 2025-03-23 13:46:39.535961 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-03-23 13:46:39.535966 | orchestrator | Sunday 23 March 2025 13:44:38 +0000 (0:00:04.169) 0:04:23.234 ********** 2025-03-23 13:46:39.535972 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:46:39.535977 | orchestrator | 2025-03-23 13:46:39.535982 | orchestrator | TASK 
[neutron : Running Neutron bootstrap container] ***************************
2025-03-23 13:46:39.535988 | orchestrator | Sunday 23 March 2025 13:44:41 +0000 (0:00:02.623) 0:04:25.858 **********
2025-03-23 13:46:39.535993 | orchestrator | changed: [testbed-node-0]
2025-03-23 13:46:39.535998 | orchestrator |
2025-03-23 13:46:39.536006 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-03-23 13:46:39.536011 | orchestrator | Sunday 23 March 2025 13:45:20 +0000 (0:00:39.723) 0:05:05.582 **********
2025-03-23 13:46:39.536017 | orchestrator |
2025-03-23 13:46:39.536022 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-03-23 13:46:39.536027 | orchestrator | Sunday 23 March 2025 13:45:20 +0000 (0:00:00.065) 0:05:05.647 **********
2025-03-23 13:46:39.536032 | orchestrator |
2025-03-23 13:46:39.536038 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-03-23 13:46:39.536043 | orchestrator | Sunday 23 March 2025 13:45:21 +0000 (0:00:00.275) 0:05:05.923 **********
2025-03-23 13:46:39.536048 | orchestrator |
2025-03-23 13:46:39.536054 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-03-23 13:46:39.536059 | orchestrator | Sunday 23 March 2025 13:45:21 +0000 (0:00:00.073) 0:05:05.996 **********
2025-03-23 13:46:39.536064 | orchestrator |
2025-03-23 13:46:39.536069 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-03-23 13:46:39.536075 | orchestrator | Sunday 23 March 2025 13:45:21 +0000 (0:00:00.060) 0:05:06.057 **********
2025-03-23 13:46:39.536080 | orchestrator |
2025-03-23 13:46:39.536085 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-03-23 13:46:39.536094 | orchestrator | Sunday 23 March 2025 13:45:21 +0000 (0:00:00.057) 0:05:06.115 **********
2025-03-23 13:46:39.536099 | orchestrator |
2025-03-23 13:46:39.536104 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-03-23 13:46:39.536110 | orchestrator | Sunday 23 March 2025 13:45:21 +0000 (0:00:00.307) 0:05:06.422 **********
2025-03-23 13:46:39.536115 | orchestrator | changed: [testbed-node-0]
2025-03-23 13:46:39.536120 | orchestrator | changed: [testbed-node-2]
2025-03-23 13:46:39.536125 | orchestrator | changed: [testbed-node-1]
2025-03-23 13:46:39.536131 | orchestrator |
2025-03-23 13:46:39.536136 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-03-23 13:46:39.536141 | orchestrator | Sunday 23 March 2025 13:45:46 +0000 (0:00:24.436) 0:05:30.859 **********
2025-03-23 13:46:39.536146 | orchestrator | changed: [testbed-node-3]
2025-03-23 13:46:39.536152 | orchestrator | changed: [testbed-node-5]
2025-03-23 13:46:39.536157 | orchestrator | changed: [testbed-node-4]
2025-03-23 13:46:39.536162 | orchestrator |
2025-03-23 13:46:39.536167 | orchestrator | PLAY RECAP *********************************************************************
2025-03-23 13:46:39.536173 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-03-23 13:46:39.536179 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-03-23 13:46:39.536184 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-03-23 13:46:39.536192 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-03-23 13:46:39.536198 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-03-23 13:46:39.536203 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-03-23 13:46:39.536208 | orchestrator |
2025-03-23 13:46:39.536214 | orchestrator |
2025-03-23 13:46:39.536219 | orchestrator | TASKS RECAP ********************************************************************
2025-03-23 13:46:39.536224 | orchestrator | Sunday 23 March 2025 13:46:37 +0000 (0:00:51.277) 0:06:22.137 **********
2025-03-23 13:46:39.536230 | orchestrator | ===============================================================================
2025-03-23 13:46:39.536235 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 51.28s
2025-03-23 13:46:39.536240 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.72s
2025-03-23 13:46:39.536245 | orchestrator | neutron : Restart neutron-server container ----------------------------- 24.44s
2025-03-23 13:46:39.536251 | orchestrator | neutron : Copying over nsx.ini ----------------------------------------- 12.81s
2025-03-23 13:46:39.536256 | orchestrator | neutron : Copying over neutron.conf ------------------------------------ 11.67s
2025-03-23 13:46:39.536261 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------ 11.12s
2025-03-23 13:46:39.536268 | orchestrator | neutron : Copying over neutron_taas.conf ------------------------------- 10.41s
2025-03-23 13:46:42.570399 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg --------------------------- 10.28s
2025-03-23 13:46:42.570503 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 9.57s
2025-03-23 13:46:42.570520 | orchestrator | neutron : Check neutron containers -------------------------------------- 8.31s
2025-03-23 13:46:42.570534 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 8.13s
2025-03-23 13:46:42.570566 | orchestrator | neutron : Copying over existing policy file ----------------------------- 7.68s
2025-03-23 13:46:42.570581 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 7.60s
2025-03-23 13:46:42.570675 | orchestrator | neutron : Copying over config.json files for services ------------------- 7.54s
2025-03-23 13:46:42.570691 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.46s
2025-03-23 13:46:42.570705 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 7.28s
2025-03-23 13:46:42.570719 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 6.90s
2025-03-23 13:46:42.570733 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 5.92s
2025-03-23 13:46:42.570747 | orchestrator | Setting sysctl values --------------------------------------------------- 5.87s
2025-03-23 13:46:42.570761 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 5.86s
2025-03-23 13:46:42.570776 | orchestrator | 2025-03-23 13:46:39 | INFO  | Task 716ce177-c311-4d5a-aebe-a1e65e65e99b is in state SUCCESS
2025-03-23 13:46:42.570791 | orchestrator | 2025-03-23 13:46:39 | INFO  | Task
70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:46:42.570805 | orchestrator | 2025-03-23 13:46:39 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:46:42.570819 | orchestrator | 2025-03-23 13:46:39 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:46:42.570850 | orchestrator | 2025-03-23 13:46:42 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:46:42.571479 | orchestrator | 2025-03-23 13:46:42 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:46:42.574488 | orchestrator | 2025-03-23 13:46:42 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:46:42.576226 | orchestrator | 2025-03-23 13:46:42 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:46:42.576260 | orchestrator | 2025-03-23 13:46:42 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:46:45.630471 | orchestrator | 2025-03-23 13:46:42 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:46:45.630651 | orchestrator | 2025-03-23 13:46:45 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:46:45.631008 | orchestrator | 2025-03-23 13:46:45 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:46:45.631975 | orchestrator | 2025-03-23 13:46:45 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:46:45.632824 | orchestrator | 2025-03-23 13:46:45 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:46:45.633662 | orchestrator | 2025-03-23 13:46:45 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:46:48.684060 | orchestrator | 2025-03-23 13:46:45 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:46:48.684188 | orchestrator | 2025-03-23 13:46:48 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:46:48.685134 | orchestrator | 2025-03-23 13:46:48 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:46:48.686122 | orchestrator | 2025-03-23 13:46:48 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:46:48.688202 | orchestrator | 2025-03-23 13:46:48 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:46:48.689274 | orchestrator | 2025-03-23 13:46:48 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:46:51.747253 | orchestrator | 2025-03-23 13:46:48 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:46:51.747397 | orchestrator | 2025-03-23 13:46:51 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:46:51.750223 | orchestrator | 2025-03-23 13:46:51 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:46:51.751036 | orchestrator | 2025-03-23 13:46:51 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:46:51.751953 | orchestrator | 2025-03-23 13:46:51 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:46:51.753010 | orchestrator | 2025-03-23 13:46:51 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:46:54.796576 | orchestrator | 2025-03-23 13:46:51 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:46:54.796870 | orchestrator | 2025-03-23 13:46:54 | INFO  | Task 
f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:46:54.797923 | orchestrator | 2025-03-23 13:46:54 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:46:54.797958 | orchestrator | 2025-03-23 13:46:54 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:46:54.798306 | orchestrator | 2025-03-23 13:46:54 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:46:54.799643 | orchestrator | 2025-03-23 13:46:54 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:46:57.842769 | orchestrator | 2025-03-23 13:46:54 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:46:57.842895 | orchestrator | 2025-03-23 13:46:57 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:46:57.844111 | orchestrator | 2025-03-23 13:46:57 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:46:57.844915 | orchestrator | 2025-03-23 13:46:57 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:46:57.845717 | orchestrator | 2025-03-23 13:46:57 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:46:57.847633 | orchestrator | 2025-03-23 13:46:57 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:47:00.879986 | orchestrator | 2025-03-23 13:46:57 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:47:00.880110 | orchestrator | 2025-03-23 13:47:00 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:47:00.880732 | orchestrator | 2025-03-23 13:47:00 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:47:00.880767 | orchestrator | 2025-03-23 13:47:00 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:47:00.888763 | orchestrator | 2025-03-23 13:47:00 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:47:00.889841 | orchestrator | 2025-03-23 13:47:00 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:47:03.918506 | orchestrator | 2025-03-23 13:47:00 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:47:03.918793 | orchestrator | 2025-03-23 13:47:03 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:47:03.919311 | orchestrator | 2025-03-23 13:47:03 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:47:03.919346 | orchestrator | 2025-03-23 13:47:03 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:47:03.919967 | orchestrator | 2025-03-23 13:47:03 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:47:03.920518 | orchestrator | 2025-03-23 13:47:03 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:47:06.955954 | orchestrator | 2025-03-23 13:47:03 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:47:06.956067 | orchestrator | 2025-03-23 13:47:06 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:47:06.956625 | orchestrator | 2025-03-23 13:47:06 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:47:06.956653 | orchestrator | 2025-03-23 13:47:06 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:47:06.957286 | orchestrator | 2025-03-23 
13:47:06 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:47:06.957824 | orchestrator | 2025-03-23 13:47:06 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:47:10.000404 | orchestrator | 2025-03-23 13:47:06 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:47:10.000540 | orchestrator | 2025-03-23 13:47:09 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:47:10.002858 | orchestrator | 2025-03-23 13:47:10 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:47:10.003897 | orchestrator | 2025-03-23 13:47:10 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:47:10.003924 | orchestrator | 2025-03-23 13:47:10 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:47:10.004650 | orchestrator | 2025-03-23 13:47:10 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:47:10.004807 | orchestrator | 2025-03-23 13:47:10 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:47:13.049124 | orchestrator | 2025-03-23 13:47:13 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:47:13.050158 | orchestrator | 2025-03-23 13:47:13 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:47:13.053131 | orchestrator | 2025-03-23 13:47:13 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:47:13.053401 | orchestrator | 2025-03-23 13:47:13 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:47:13.053433 | orchestrator | 2025-03-23 13:47:13 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:47:13.053560 | orchestrator | 2025-03-23 13:47:13 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:47:16.092242 | orchestrator | 2025-03-23 13:47:16 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:47:16.093306 | orchestrator | 2025-03-23 13:47:16 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:47:16.094644 | orchestrator | 2025-03-23 13:47:16 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:47:16.099762 | orchestrator | 2025-03-23 13:47:16 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:47:16.101478 | orchestrator | 2025-03-23 13:47:16 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:47:19.155543 | orchestrator | 2025-03-23 13:47:16 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:47:19.155691 | orchestrator | 2025-03-23 13:47:19 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:47:19.159499 | orchestrator | 2025-03-23 13:47:19 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:47:19.161365 | orchestrator | 2025-03-23 13:47:19 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:47:19.162522 | orchestrator | 2025-03-23 13:47:19 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:47:19.163402 | orchestrator | 2025-03-23 13:47:19 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:47:22.202276 | orchestrator | 2025-03-23 13:47:19 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:47:22.202387 | orchestrator | 2025-03-23 
13:47:22 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:47:22.203327 | orchestrator | 2025-03-23 13:47:22 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:47:22.206084 | orchestrator | 2025-03-23 13:47:22 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:47:22.207249 | orchestrator | 2025-03-23 13:47:22 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:47:22.208781 | orchestrator | 2025-03-23 13:47:22 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:47:25.250947 | orchestrator | 2025-03-23 13:47:22 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:47:25.251081 | orchestrator | 2025-03-23 13:47:25 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:47:25.251679 | orchestrator | 2025-03-23 13:47:25 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:47:25.252640 | orchestrator | 2025-03-23 13:47:25 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:47:25.256969 | orchestrator | 2025-03-23 13:47:25 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:47:25.258346 | orchestrator | 2025-03-23 13:47:25 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:47:28.311986 | orchestrator | 2025-03-23 13:47:25 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:47:28.312122 | orchestrator | 2025-03-23 13:47:28 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:47:28.312566 | orchestrator | 2025-03-23 13:47:28 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:47:28.313510 | orchestrator | 2025-03-23 13:47:28 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:47:28.314679 | orchestrator | 2025-03-23 13:47:28 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:47:28.315642 | orchestrator | 2025-03-23 13:47:28 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:47:31.353842 | orchestrator | 2025-03-23 13:47:28 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:47:31.353952 | orchestrator | 2025-03-23 13:47:31 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:47:31.354520 | orchestrator | 2025-03-23 13:47:31 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:47:31.356500 | orchestrator | 2025-03-23 13:47:31 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:47:31.357356 | orchestrator | 2025-03-23 13:47:31 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:47:31.358287 | orchestrator | 2025-03-23 13:47:31 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:47:34.407065 | orchestrator | 2025-03-23 13:47:31 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:47:34.407206 | orchestrator | 2025-03-23 13:47:34 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:47:34.411505 | orchestrator | 2025-03-23 13:47:34 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:47:34.411616 | orchestrator | 2025-03-23 13:47:34 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:47:37.452486 | 
orchestrator | 2025-03-23 13:47:34 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:47:37.452653 | orchestrator | 2025-03-23 13:47:34 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:47:37.452798 | orchestrator | 2025-03-23 13:47:34 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:47:37.452840 | orchestrator | 2025-03-23 13:47:37 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:47:37.453530 | orchestrator | 2025-03-23 13:47:37 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:47:37.453563 | orchestrator | 2025-03-23 13:47:37 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:47:37.454308 | orchestrator | 2025-03-23 13:47:37 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:47:37.455099 | orchestrator | 2025-03-23 13:47:37 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:47:40.517324 | orchestrator | 2025-03-23 13:47:37 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:47:40.517465 | orchestrator | 2025-03-23 13:47:40 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:47:40.521861 | orchestrator | 2025-03-23 13:47:40 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:47:40.524089 | orchestrator | 2025-03-23 13:47:40 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:47:40.525748 | orchestrator | 2025-03-23 13:47:40 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:47:40.529156 | orchestrator | 2025-03-23 13:47:40 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:47:43.577391 | orchestrator | 2025-03-23 13:47:40 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:47:43.577529 | orchestrator | 2025-03-23 13:47:43 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:47:43.578937 | orchestrator | 2025-03-23 13:47:43 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:47:43.580468 | orchestrator | 2025-03-23 13:47:43 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:47:43.581237 | orchestrator | 2025-03-23 13:47:43 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:47:43.582383 | orchestrator | 2025-03-23 13:47:43 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:47:46.633516 | orchestrator | 2025-03-23 13:47:43 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:47:46.633681 | orchestrator | 2025-03-23 13:47:46 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:47:46.633913 | orchestrator | 2025-03-23 13:47:46 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:47:46.635603 | orchestrator | 2025-03-23 13:47:46 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:47:46.636915 | orchestrator | 2025-03-23 13:47:46 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:47:46.637708 | orchestrator | 2025-03-23 13:47:46 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:47:49.682465 | orchestrator | 2025-03-23 13:47:46 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:47:49.682758 | 
orchestrator | 2025-03-23 13:47:49 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:47:49.683463 | orchestrator | 2025-03-23 13:47:49 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:47:49.683496 | orchestrator | 2025-03-23 13:47:49 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:47:49.683943 | orchestrator | 2025-03-23 13:47:49 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:47:49.684792 | orchestrator | 2025-03-23 13:47:49 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:47:52.718246 | orchestrator | 2025-03-23 13:47:49 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:47:52.718387 | orchestrator | 2025-03-23 13:47:52 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:47:52.719176 | orchestrator | 2025-03-23 13:47:52 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:47:52.720048 | orchestrator | 2025-03-23 13:47:52 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:47:52.720079 | orchestrator | 2025-03-23 13:47:52 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:47:52.720786 | orchestrator | 2025-03-23 13:47:52 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:47:55.746231 | orchestrator | 2025-03-23 13:47:52 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:47:55.746355 | orchestrator | 2025-03-23 13:47:55 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:47:55.746870 | orchestrator | 2025-03-23 13:47:55 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:47:55.747704 | orchestrator | 2025-03-23 13:47:55 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:47:55.748955 | orchestrator | 2025-03-23 13:47:55 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:47:55.749832 | orchestrator | 2025-03-23 13:47:55 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:47:58.791713 | orchestrator | 2025-03-23 13:47:55 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:47:58.791844 | orchestrator | 2025-03-23 13:47:58 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:47:58.794524 | orchestrator | 2025-03-23 13:47:58 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:47:58.797238 | orchestrator | 2025-03-23 13:47:58 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:47:58.799761 | orchestrator | 2025-03-23 13:47:58 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:47:58.801941 | orchestrator | 2025-03-23 13:47:58 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:47:58.802622 | orchestrator | 2025-03-23 13:47:58 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:48:01.852722 | orchestrator | 2025-03-23 13:48:01 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:48:01.853387 | orchestrator | 2025-03-23 13:48:01 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:48:01.854426 | orchestrator | 2025-03-23 13:48:01 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 
2025-03-23 13:48:01.855263 | orchestrator | 2025-03-23 13:48:01 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:48:01.857595 | orchestrator | 2025-03-23 13:48:01 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:48:04.935281 | orchestrator | 2025-03-23 13:48:01 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:48:04.935403 | orchestrator | 2025-03-23 13:48:04 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:48:04.938159 | orchestrator | 2025-03-23 13:48:04 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:48:04.940150 | orchestrator | 2025-03-23 13:48:04 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:48:04.944087 | orchestrator | 2025-03-23 13:48:04 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:48:04.945000 | orchestrator | 2025-03-23 13:48:04 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:48:07.987310 | orchestrator | 2025-03-23 13:48:04 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:48:07.987528 | orchestrator | 2025-03-23 13:48:07 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:48:07.988390 | orchestrator | 2025-03-23 13:48:07 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:48:07.988426 | orchestrator | 2025-03-23 13:48:07 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:48:07.989139 | orchestrator | 2025-03-23 13:48:07 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:48:07.990589 | orchestrator | 2025-03-23 13:48:07 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:48:11.062995 | orchestrator | 2025-03-23 13:48:07 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:48:11.063124 | orchestrator | 2025-03-23 13:48:11 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:48:11.064449 | orchestrator | 2025-03-23 13:48:11 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:48:11.064483 | orchestrator | 2025-03-23 13:48:11 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:48:11.065759 | orchestrator | 2025-03-23 13:48:11 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:48:11.067747 | orchestrator | 2025-03-23 13:48:11 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:48:14.134163 | orchestrator | 2025-03-23 13:48:11 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:48:14.134295 | orchestrator | 2025-03-23 13:48:14 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:48:14.142241 | orchestrator | 2025-03-23 13:48:14 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:48:14.145373 | orchestrator | 2025-03-23 13:48:14 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:48:14.145701 | orchestrator | 2025-03-23 13:48:14 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:48:14.150647 | orchestrator | 2025-03-23 13:48:14 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:48:17.189387 | orchestrator | 2025-03-23 13:48:14 | INFO  | Wait 1 second(s) until the next check 
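The repeated "Task <id> is in state STARTED" / "Wait 1 second(s) until the next check" entries above and below come from a client-side polling loop on the orchestrator: it keeps querying the state of the OSISM tasks it launched, sleeps one second between passes, and only moves on once a task reports SUCCESS. The following Python snippet is a minimal sketch of that pattern only; get_task_state(), the fake backend, and the shortened task IDs are illustrative placeholders, not the actual OSISM client API.

import itertools
import time

# Toy stand-in for the task backend: each task reports STARTED a few times and
# then SUCCESS forever. Purely illustrative; real states come from the task API.
_FAKE_BACKEND = {
    "f8079d8c": itertools.chain(["STARTED"] * 3, itertools.repeat("SUCCESS")),
    "db570ec5": itertools.chain(["STARTED"] * 1, itertools.repeat("SUCCESS")),
}

def get_task_state(task_id: str) -> str:
    # Hypothetical helper returning the current state of a task.
    return next(_FAKE_BACKEND[task_id])

def wait_for_tasks(task_ids, interval: float = 1.0) -> None:
    """Poll until every task reports SUCCESS, logging each check like the job log."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

wait_for_tasks(["f8079d8c", "db570ec5"])

Under this sketch, the loop keeps printing STARTED lines for tasks that are still running, which is why the log shows the same five task IDs cycling until one of them finally flips to SUCCESS further below.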
2025-03-23 13:48:17.189520 | orchestrator | 2025-03-23 13:48:17 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:48:17.193723 | orchestrator | 2025-03-23 13:48:17 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:48:17.195650 | orchestrator | 2025-03-23 13:48:17 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:48:17.195680 | orchestrator | 2025-03-23 13:48:17 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:48:17.195700 | orchestrator | 2025-03-23 13:48:17 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:48:20.249377 | orchestrator | 2025-03-23 13:48:17 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:48:20.249500 | orchestrator | 2025-03-23 13:48:20 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:48:20.253333 | orchestrator | 2025-03-23 13:48:20 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:48:20.254097 | orchestrator | 2025-03-23 13:48:20 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:48:20.254145 | orchestrator | 2025-03-23 13:48:20 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:48:20.256067 | orchestrator | 2025-03-23 13:48:20 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:48:23.307866 | orchestrator | 2025-03-23 13:48:20 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:48:23.308002 | orchestrator | 2025-03-23 13:48:23 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:48:23.308137 | orchestrator | 2025-03-23 13:48:23 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:48:23.309697 | orchestrator | 2025-03-23 13:48:23 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:48:23.311167 | orchestrator | 2025-03-23 13:48:23 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:48:23.316079 | orchestrator | 2025-03-23 13:48:23 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:48:26.391848 | orchestrator | 2025-03-23 13:48:23 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:48:26.391977 | orchestrator | 2025-03-23 13:48:26 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:48:26.392446 | orchestrator | 2025-03-23 13:48:26 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:48:26.397086 | orchestrator | 2025-03-23 13:48:26 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:48:26.400771 | orchestrator | 2025-03-23 13:48:26 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:48:26.402504 | orchestrator | 2025-03-23 13:48:26 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:48:29.453458 | orchestrator | 2025-03-23 13:48:26 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:48:29.453622 | orchestrator | 2025-03-23 13:48:29 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:48:29.457456 | orchestrator | 2025-03-23 13:48:29 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:48:29.458700 | orchestrator | 2025-03-23 13:48:29 | INFO  | Task 
c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:48:29.459541 | orchestrator | 2025-03-23 13:48:29 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:48:29.462121 | orchestrator | 2025-03-23 13:48:29 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:48:32.513063 | orchestrator | 2025-03-23 13:48:29 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:48:32.513194 | orchestrator | 2025-03-23 13:48:32 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:48:32.513996 | orchestrator | 2025-03-23 13:48:32 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:48:32.514472 | orchestrator | 2025-03-23 13:48:32 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:48:32.515005 | orchestrator | 2025-03-23 13:48:32 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:48:32.516302 | orchestrator | 2025-03-23 13:48:32 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:48:35.557930 | orchestrator | 2025-03-23 13:48:32 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:48:35.558110 | orchestrator | 2025-03-23 13:48:35 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:48:35.558192 | orchestrator | 2025-03-23 13:48:35 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:48:35.558626 | orchestrator | 2025-03-23 13:48:35 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:48:35.560603 | orchestrator | 2025-03-23 13:48:35 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:48:35.562523 | orchestrator | 2025-03-23 13:48:35 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:48:35.564875 | orchestrator | 2025-03-23 13:48:35 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:48:38.613008 | orchestrator | 2025-03-23 13:48:38 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:48:38.614596 | orchestrator | 2025-03-23 13:48:38 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:48:38.617084 | orchestrator | 2025-03-23 13:48:38 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:48:38.619678 | orchestrator | 2025-03-23 13:48:38 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:48:38.622462 | orchestrator | 2025-03-23 13:48:38 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:48:41.678880 | orchestrator | 2025-03-23 13:48:38 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:48:41.679014 | orchestrator | 2025-03-23 13:48:41 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:48:41.679764 | orchestrator | 2025-03-23 13:48:41 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:48:41.679794 | orchestrator | 2025-03-23 13:48:41 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:48:41.679815 | orchestrator | 2025-03-23 13:48:41 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:48:41.681234 | orchestrator | 2025-03-23 13:48:41 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:48:44.735897 | orchestrator | 2025-03-23 
13:48:41 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:48:44.736026 | orchestrator | 2025-03-23 13:48:44 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:48:44.737857 | orchestrator | 2025-03-23 13:48:44 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:48:44.740030 | orchestrator | 2025-03-23 13:48:44 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:48:44.742534 | orchestrator | 2025-03-23 13:48:44 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:48:44.742879 | orchestrator | 2025-03-23 13:48:44 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:48:47.805919 | orchestrator | 2025-03-23 13:48:44 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:48:47.806082 | orchestrator | 2025-03-23 13:48:47 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:48:50.861019 | orchestrator | 2025-03-23 13:48:47 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:48:50.861136 | orchestrator | 2025-03-23 13:48:47 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:48:50.861156 | orchestrator | 2025-03-23 13:48:47 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:48:50.861172 | orchestrator | 2025-03-23 13:48:47 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:48:50.861187 | orchestrator | 2025-03-23 13:48:47 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:48:50.861220 | orchestrator | 2025-03-23 13:48:50 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:48:50.862466 | orchestrator | 2025-03-23 13:48:50 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:48:50.865022 | orchestrator | 2025-03-23 13:48:50 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:48:50.865986 | orchestrator | 2025-03-23 13:48:50 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:48:50.868063 | orchestrator | 2025-03-23 13:48:50 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:48:53.932762 | orchestrator | 2025-03-23 13:48:50 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:48:53.932897 | orchestrator | 2025-03-23 13:48:53 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:48:53.933460 | orchestrator | 2025-03-23 13:48:53 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:48:53.933495 | orchestrator | 2025-03-23 13:48:53 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:48:53.934642 | orchestrator | 2025-03-23 13:48:53 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:48:53.935699 | orchestrator | 2025-03-23 13:48:53 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:48:56.969174 | orchestrator | 2025-03-23 13:48:53 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:48:56.969288 | orchestrator | 2025-03-23 13:48:56 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:48:56.970005 | orchestrator | 2025-03-23 13:48:56 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:48:56.972404 | orchestrator | 2025-03-23 
13:48:56 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:48:56.973586 | orchestrator | 2025-03-23 13:48:56 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:48:56.974572 | orchestrator | 2025-03-23 13:48:56 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:49:00.031756 | orchestrator | 2025-03-23 13:48:56 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:49:00.031851 | orchestrator | 2025-03-23 13:49:00 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:49:00.032921 | orchestrator | 2025-03-23 13:49:00 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:49:00.033487 | orchestrator | 2025-03-23 13:49:00 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:49:00.034675 | orchestrator | 2025-03-23 13:49:00 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:49:00.036421 | orchestrator | 2025-03-23 13:49:00 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:49:00.036498 | orchestrator | 2025-03-23 13:49:00 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:49:03.075045 | orchestrator | 2025-03-23 13:49:03 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:49:03.075634 | orchestrator | 2025-03-23 13:49:03 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state STARTED 2025-03-23 13:49:03.076322 | orchestrator | 2025-03-23 13:49:03 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:49:03.077022 | orchestrator | 2025-03-23 13:49:03 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:49:03.077853 | orchestrator | 2025-03-23 13:49:03 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:49:06.118277 | orchestrator | 2025-03-23 13:49:03 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:49:06.118382 | orchestrator | 2025-03-23 13:49:06 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:49:06.120470 | orchestrator | 2025-03-23 13:49:06 | INFO  | Task db570ec5-7bb0-4b8e-ab43-03e77edb37a0 is in state SUCCESS 2025-03-23 13:49:06.122900 | orchestrator | 2025-03-23 13:49:06.122941 | orchestrator | 2025-03-23 13:49:06.122954 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 13:49:06.122968 | orchestrator | 2025-03-23 13:49:06.123014 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 13:49:06.123044 | orchestrator | Sunday 23 March 2025 13:43:08 +0000 (0:00:00.433) 0:00:00.433 ********** 2025-03-23 13:49:06.123058 | orchestrator | ok: [testbed-manager] 2025-03-23 13:49:06.123182 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:49:06.123196 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:49:06.123209 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:49:06.123223 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:49:06.123236 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:49:06.123249 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:49:06.123262 | orchestrator | 2025-03-23 13:49:06.123275 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-23 13:49:06.123289 | orchestrator | Sunday 23 March 2025 13:43:09 +0000 (0:00:00.848) 0:00:01.282 
********** 2025-03-23 13:49:06.123302 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-03-23 13:49:06.123316 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-03-23 13:49:06.123329 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-03-23 13:49:06.123342 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-03-23 13:49:06.123355 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-03-23 13:49:06.123368 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-03-23 13:49:06.123383 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-03-23 13:49:06.123396 | orchestrator | 2025-03-23 13:49:06.123410 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-03-23 13:49:06.123423 | orchestrator | 2025-03-23 13:49:06.123436 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-03-23 13:49:06.123449 | orchestrator | Sunday 23 March 2025 13:43:11 +0000 (0:00:02.845) 0:00:04.127 ********** 2025-03-23 13:49:06.123463 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:49:06.123477 | orchestrator | 2025-03-23 13:49:06.123491 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-03-23 13:49:06.123551 | orchestrator | Sunday 23 March 2025 13:43:16 +0000 (0:00:05.081) 0:00:09.208 ********** 2025-03-23 13:49:06.123571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:49:06.123651 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-23 13:49:06.123689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.123719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:49:06.123734 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.123748 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.123773 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.123800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:49:06.123816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:49:06.123838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.123853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.123868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:49:06.123888 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.123903 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.123916 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.123940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.123954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.123974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.123988 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.124000 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.124020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.124033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.124055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:49:06.124076 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-23 13:49:06.124090 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.124104 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:49:06.124123 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.124146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 13:49:06.124161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:49:06.124180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.124194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.124213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.124226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.124249 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.124263 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 13:49:06.124282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.124296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:49:06.124316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.124329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': 
True}}}})  2025-03-23 13:49:06.124353 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.124366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:49:06.124386 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.124889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.124935 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.124968 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.124982 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.124995 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.125010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.125022 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.125044 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.125127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.125172 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 13:49:06.125267 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.125281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:49:06.125295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.125317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.125339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.125389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.125405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.125419 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.125432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 13:49:06.125462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': 
{'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:49:06.125483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.125498 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.125512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.125527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 
13:49:06.125598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:49:06.125677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.125709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.125775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.125816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.125831 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.125844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.125857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.125870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.125909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.125924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.125936 | orchestrator | 2025-03-23 13:49:06.125950 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-03-23 13:49:06.125962 | orchestrator | Sunday 23 March 2025 13:43:22 +0000 (0:00:05.171) 0:00:14.380 ********** 2025-03-23 13:49:06.125975 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:49:06.125988 | orchestrator | 2025-03-23 13:49:06.126000 | orchestrator | 
TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-03-23 13:49:06.126424 | orchestrator | Sunday 23 March 2025 13:43:25 +0000 (0:00:03.044) 0:00:17.424 ********** 2025-03-23 13:49:06.126447 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-23 13:49:06.126462 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.126476 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.126498 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.126593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.126999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.127026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.127040 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.127054 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.127068 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.127094 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.127107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.127168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.127184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.127197 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.127211 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.127281 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.127298 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 
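Every loop item printed by these prometheus tasks follows the same kolla-ansible service-definition shape (container_name, group, enabled, image, volumes, optional pid_mode/environment, dimensions, and for front-ended services a haproxy sub-dict). As an illustrative aid only, here is a minimal Python sketch of one such entry, reconstructed from the key/value pairs in this log; the mapping name prometheus_services and the trailing loop are assumptions for illustration, not taken from the playbooks.

# Minimal sketch (not from the playbooks): one entry of the service map the
# prometheus role iterates over. Field names and values are copied verbatim
# from the loop items in this log; the variable name is assumed.
prometheus_services = {
    "prometheus-node-exporter": {
        "container_name": "prometheus_node_exporter",
        "group": "prometheus-node-exporter",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206",
        "pid_mode": "host",  # host PID namespace plus the /:/host mount, as shown in the items above
        "volumes": [
            "/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
            "/:/host:ro,rslave",
        ],
        "dimensions": {},
    },
}

# Ansible renders a dict-driven loop as {'key': ..., 'value': ...} items, which
# is exactly the shape of every "changed:"/"skipping:" line in this task output.
for item in ({"key": k, "value": v} for k, v in prometheus_services.items()):
    print(item["key"], "->", item["value"]["image"])

Entries such as prometheus-server and prometheus-alertmanager additionally carry the haproxy sub-dict visible above (port, external_fqdn, auth_user/auth_pass, active_passive); the same item appears as "changed" on the nodes where that service is deployed and as "skipping" on the others.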
2025-03-23 13:49:06.127333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.127347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.127388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.127403 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-23 13:49:06.127418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.127432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.128099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.128118 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.128139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.128151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.128161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.128172 | orchestrator | 2025-03-23 13:49:06.128182 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-03-23 13:49:06.128193 | orchestrator | Sunday 23 March 2025 13:43:33 +0000 (0:00:08.021) 0:00:25.446 ********** 2025-03-23 13:49:06.128204 | 
orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:49:06.128215 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 13:49:06.128232 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.128258 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 13:49:06.128270 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2025-03-23 13:49:06.128281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 13:49:06.128292 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:49:06.128303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.128314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.128329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.128340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.128351 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:06.128368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 13:49:06.128384 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.128395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.128406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.128417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.128444 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:49:06.128463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 13:49:06.128475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.128495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.128507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.128523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.128616 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:49:06.128627 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 13:49:06.128638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.128656 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.128666 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:49:06.128677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 13:49:06.128697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.128709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.128721 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:49:06.128738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 13:49:06.128750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.128762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.128779 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:49:06.128790 | orchestrator | 2025-03-23 13:49:06.128802 | orchestrator | TASK [service-cert-copy : 
prometheus | Copying over backend internal TLS key] *** 2025-03-23 13:49:06.128814 | orchestrator | Sunday 23 March 2025 13:43:38 +0000 (0:00:04.857) 0:00:30.303 ********** 2025-03-23 13:49:06.128825 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:49:06.128837 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 13:49:06.128861 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.128880 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 13:49:06.128893 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 
'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.128904 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:49:06.128916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 13:49:06.128936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.128957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.128969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.128981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.128992 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:06.129004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 13:49:06.129020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.129033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.129049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.129061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.129073 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:49:06.129093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 13:49:06.129104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.129115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.129131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.129141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.129157 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:49:06.129167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 13:49:06.129178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 13:49:06.129196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': 
{}}})  2025-03-23 13:49:06.129208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.129218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.129229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.129239 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:49:06.129260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.129285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.129295 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:49:06.129306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.129317 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:49:06.129327 | orchestrator | 2025-03-23 13:49:06.129337 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-03-23 13:49:06.129347 | orchestrator | Sunday 23 March 2025 13:43:45 +0000 (0:00:07.901) 0:00:38.205 ********** 2025-03-23 13:49:06.129358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:49:06.129369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:49:06.129384 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-23 13:49:06.129410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:49:06.129421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:49:06.129431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.129442 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.129452 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.129463 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.129478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.129494 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:49:06.129513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:49:06.129525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.129551 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.129562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.129573 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.129594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.129605 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.129624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.129635 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.129646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.129656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.129667 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.129691 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-23 13:49:06.129709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.129720 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:49:06.129730 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.129741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.129752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.129769 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.129791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 13:49:06.129802 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.129813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:49:06.129824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 13:49:06.129847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.129862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.129873 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:49:06.129884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.129895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.129906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.129924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.129939 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.129955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': 
{'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 13:49:06.130132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:49:06.130151 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.130161 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.130185 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.130203 | orchestrator | skipping: [testbed-manager] => 
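For the "changed" items in this "Copying over config.json files" task, kolla-ansible renders one config.json per enabled service into /etc/kolla/<service>/ on the target host; at container start the kolla entrypoint (driven by KOLLA_CONFIG_STRATEGY) copies the listed files into place inside the container. A minimal sketch of the rendered file's shape follows, written as YAML so it can carry comments (on disk it is JSON with the same keys); the command line, file names, and owner are placeholders, not values taken from this deployment.

command: /usr/local/bin/example-exporter --example-flag     # placeholder command
config_files:
  - source: /var/lib/kolla/config_files/example.yml         # placeholder source file
    dest: /etc/example/example.yml                          # placeholder destination
    owner: prometheus                                       # placeholder owner
    perm: "0600"
    optional: true

The bind mount /etc/kolla/<service>/:/var/lib/kolla/config_files/:ro that appears in every volumes list in this log is what makes the rendered config.json visible to that entrypoint.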
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.130214 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.130251 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.130264 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.130275 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.130295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.130306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.130322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.130355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 13:49:06.130367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 13:49:06.130388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:49:06.130399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:49:06.130414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.130425 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.130458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.130470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.130489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 13:49:06.130501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:49:06.130520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.130579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.130618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.130631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.130642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.130663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.130675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.130714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.130725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 13:49:06.130735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 13:49:06.130770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-03-23 13:49:06.130793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 13:49:06.130804 | orchestrator |
2025-03-23 13:49:06.130815 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-03-23 13:49:06.130825 | orchestrator | Sunday 23 March 2025 13:43:58 +0000 (0:00:12.486) 0:00:50.691 **********
2025-03-23 13:49:06.130835 | orchestrator | ok: [testbed-manager -> localhost]
2025-03-23 13:49:06.130845 | orchestrator |
2025-03-23 13:49:06.130855 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-03-23 13:49:06.130869 | orchestrator | Sunday 23 March 2025 13:43:59 +0000 (0:00:01.520) 0:00:52.212 **********
2025-03-23 13:49:06.130880 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1091418, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-23 13:49:06.130895 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1091418, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True,
'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.130906 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1091418, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.130917 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1091425, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.130950 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1091418, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.130968 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1091418, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.130977 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1091425, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.130986 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1091418, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131000 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1091425, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131009 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1091425, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131018 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1091420, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131052 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1091425, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131063 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1091418, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:49:06.131072 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1091425, 'dev': 
222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131081 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1091420, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131095 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1091420, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131104 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1091420, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131113 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1091424, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131148 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1091420, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131159 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1091424, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131168 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1091424, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131181 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1091420, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131190 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1091424, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131199 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1091424, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131213 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1091438, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1488569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131223 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1091438, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1488569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131251 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1091424, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131262 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1091438, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1488569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131276 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1091425, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:49:06.131285 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1091438, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1488569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131294 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1091427, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131310 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1091438, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1488569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131319 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1091427, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131347 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1091438, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1488569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131358 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1091427, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131378 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1091427, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131388 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1091427, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131397 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1091423, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131413 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1091423, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131422 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1091427, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131450 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1091423, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131466 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1091423, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131475 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 996, 'inode': 1091423, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131484 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1091426, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131499 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1091426, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131509 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1091426, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131518 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1091423, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131563 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1091426, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131581 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1091426, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131590 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1091420, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:49:06.131607 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1091437, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1488569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131616 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1091437, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1488569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131625 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1091426, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131634 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1091437, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1488569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131663 | orchestrator | 
skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1091437, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1488569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131678 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1091437, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1488569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131687 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1091422, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131703 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1091422, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131713 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1091422, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131722 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1091437, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1488569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131730 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1091422, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131759 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1091422, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131775 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1091429, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131784 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:06.131800 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1091429, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131809 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:49:06.131818 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1091429, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131827 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:49:06.131836 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1091422, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131845 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1091429, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131853 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:49:06.131862 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1091429, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131894 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:49:06.131905 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1091429, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-23 13:49:06.131914 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:49:06.131923 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1091424, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:49:06.131939 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1091438, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1488569, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:49:06.131948 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1091427, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:49:06.131957 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1091423, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:49:06.131966 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1091426, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:49:06.131998 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1091437, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1488569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:49:06.132019 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1091422, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:49:06.132035 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 
'inode': 1091429, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.147857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-23 13:49:06.132048 | orchestrator | 2025-03-23 13:49:06.132064 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-03-23 13:49:06.132073 | orchestrator | Sunday 23 March 2025 13:45:01 +0000 (0:01:02.024) 0:01:54.236 ********** 2025-03-23 13:49:06.132082 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-23 13:49:06.132090 | orchestrator | 2025-03-23 13:49:06.132099 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-03-23 13:49:06.132107 | orchestrator | Sunday 23 March 2025 13:45:02 +0000 (0:00:00.442) 0:01:54.679 ********** 2025-03-23 13:49:06.132116 | orchestrator | [WARNING]: Skipped 2025-03-23 13:49:06.132125 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-23 13:49:06.132133 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-03-23 13:49:06.132142 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-23 13:49:06.132150 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-03-23 13:49:06.132159 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-23 13:49:06.132167 | orchestrator | [WARNING]: Skipped 2025-03-23 13:49:06.132176 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-23 13:49:06.132184 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-03-23 13:49:06.132193 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-23 13:49:06.132201 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-03-23 13:49:06.132210 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-23 13:49:06.132218 | orchestrator | [WARNING]: Skipped 2025-03-23 13:49:06.132227 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-23 13:49:06.132235 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-03-23 13:49:06.132243 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-23 13:49:06.132252 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-03-23 13:49:06.132260 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-03-23 13:49:06.132277 | orchestrator | [WARNING]: Skipped 2025-03-23 13:49:06.132286 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-23 13:49:06.132294 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-03-23 13:49:06.132302 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-23 13:49:06.132311 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-03-23 13:49:06.132319 | orchestrator | [WARNING]: Skipped 2025-03-23 13:49:06.132328 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-23 13:49:06.132336 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-03-23 13:49:06.132344 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 
2025-03-23 13:49:06.132353 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-03-23 13:49:06.132361 | orchestrator | [WARNING]: Skipped 2025-03-23 13:49:06.132370 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-23 13:49:06.132378 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-03-23 13:49:06.132387 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-23 13:49:06.132395 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-03-23 13:49:06.132404 | orchestrator | [WARNING]: Skipped 2025-03-23 13:49:06.132412 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-23 13:49:06.132420 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-03-23 13:49:06.132429 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-23 13:49:06.132437 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-03-23 13:49:06.132446 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-03-23 13:49:06.132454 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-03-23 13:49:06.132463 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-03-23 13:49:06.132493 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-03-23 13:49:06.132503 | orchestrator | 2025-03-23 13:49:06.132512 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-03-23 13:49:06.132520 | orchestrator | Sunday 23 March 2025 13:45:03 +0000 (0:00:01.362) 0:01:56.042 ********** 2025-03-23 13:49:06.132544 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-03-23 13:49:06.132553 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:49:06.132562 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-03-23 13:49:06.132571 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:06.132579 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-03-23 13:49:06.132588 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:49:06.132597 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-03-23 13:49:06.132605 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:49:06.132614 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-03-23 13:49:06.132622 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:49:06.132631 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-03-23 13:49:06.132639 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:49:06.132648 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-03-23 13:49:06.132656 | orchestrator | 2025-03-23 13:49:06.132665 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-03-23 13:49:06.132674 | orchestrator | Sunday 23 March 2025 13:45:25 +0000 (0:00:21.792) 0:02:17.835 ********** 2025-03-23 13:49:06.132682 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-03-23 13:49:06.132696 | orchestrator | skipping: [testbed-node-1] 2025-03-23 
13:49:06.132704 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-03-23 13:49:06.132713 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:06.132721 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-03-23 13:49:06.132730 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:49:06.132739 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-03-23 13:49:06.132747 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:49:06.132756 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-03-23 13:49:06.132764 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:49:06.132773 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-03-23 13:49:06.132781 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:49:06.132790 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-03-23 13:49:06.132798 | orchestrator | 2025-03-23 13:49:06.132807 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-03-23 13:49:06.132815 | orchestrator | Sunday 23 March 2025 13:45:32 +0000 (0:00:07.349) 0:02:25.184 ********** 2025-03-23 13:49:06.132824 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-03-23 13:49:06.132833 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:49:06.132841 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-03-23 13:49:06.132850 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:49:06.132859 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-03-23 13:49:06.132867 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:49:06.132876 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-03-23 13:49:06.132884 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:06.132893 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-03-23 13:49:06.132902 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:49:06.132910 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-03-23 13:49:06.132919 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:49:06.132927 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-03-23 13:49:06.132936 | orchestrator | 2025-03-23 13:49:06.132944 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-03-23 13:49:06.132953 | orchestrator | Sunday 23 March 2025 13:45:36 +0000 (0:00:03.882) 0:02:29.067 ********** 2025-03-23 13:49:06.132962 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-23 13:49:06.132970 | orchestrator | 2025-03-23 13:49:06.132979 | 
orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-03-23 13:49:06.132987 | orchestrator | Sunday 23 March 2025 13:45:37 +0000 (0:00:00.505) 0:02:29.573 ********** 2025-03-23 13:49:06.132996 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:49:06.133007 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:06.133016 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:49:06.133025 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:49:06.133033 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:49:06.133042 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:49:06.133055 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:49:06.133063 | orchestrator | 2025-03-23 13:49:06.133072 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-03-23 13:49:06.133081 | orchestrator | Sunday 23 March 2025 13:45:38 +0000 (0:00:00.756) 0:02:30.330 ********** 2025-03-23 13:49:06.133089 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:49:06.133098 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:49:06.133106 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:49:06.133115 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:49:06.133123 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:49:06.133132 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:49:06.133140 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:49:06.133149 | orchestrator | 2025-03-23 13:49:06.133157 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-03-23 13:49:06.133170 | orchestrator | Sunday 23 March 2025 13:45:42 +0000 (0:00:04.075) 0:02:34.405 ********** 2025-03-23 13:49:06.133179 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-03-23 13:49:06.133187 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:49:06.133196 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-03-23 13:49:06.133205 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:06.133219 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-03-23 13:49:06.133228 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:49:06.133237 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-03-23 13:49:06.133246 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:49:06.133255 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-03-23 13:49:06.133264 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:49:06.133273 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-03-23 13:49:06.133281 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:49:06.133290 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-03-23 13:49:06.133298 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:49:06.133307 | orchestrator | 2025-03-23 13:49:06.133315 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-03-23 13:49:06.133324 | orchestrator | Sunday 23 March 2025 13:45:46 +0000 (0:00:04.530) 0:02:38.936 ********** 2025-03-23 13:49:06.133332 | orchestrator | skipping: 
[testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-03-23 13:49:06.133341 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:06.133350 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-03-23 13:49:06.133358 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:49:06.133367 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-03-23 13:49:06.133375 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:49:06.133387 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-03-23 13:49:06.133396 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:49:06.133405 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-03-23 13:49:06.133413 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:49:06.133422 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-03-23 13:49:06.133430 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:49:06.133439 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-03-23 13:49:06.133452 | orchestrator | 2025-03-23 13:49:06.133461 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-03-23 13:49:06.133469 | orchestrator | Sunday 23 March 2025 13:45:52 +0000 (0:00:06.159) 0:02:45.095 ********** 2025-03-23 13:49:06.133478 | orchestrator | [WARNING]: Skipped 2025-03-23 13:49:06.133486 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-03-23 13:49:06.133495 | orchestrator | due to this access issue: 2025-03-23 13:49:06.133503 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-03-23 13:49:06.133512 | orchestrator | not a directory 2025-03-23 13:49:06.133521 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-23 13:49:06.133564 | orchestrator | 2025-03-23 13:49:06.133574 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-03-23 13:49:06.133583 | orchestrator | Sunday 23 March 2025 13:45:57 +0000 (0:00:04.266) 0:02:49.362 ********** 2025-03-23 13:49:06.133591 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:49:06.133600 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:06.133608 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:49:06.133617 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:49:06.133625 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:49:06.133634 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:49:06.133642 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:49:06.133650 | orchestrator | 2025-03-23 13:49:06.133659 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-03-23 13:49:06.133667 | orchestrator | Sunday 23 March 2025 13:45:59 +0000 (0:00:02.340) 0:02:51.702 ********** 2025-03-23 13:49:06.133676 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:49:06.133689 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:06.133697 | orchestrator | skipping: [testbed-node-1] 2025-03-23 
13:49:06.133706 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:49:06.133714 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:49:06.133723 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:49:06.133731 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:49:06.133739 | orchestrator | 2025-03-23 13:49:06.133748 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] **************** 2025-03-23 13:49:06.133757 | orchestrator | Sunday 23 March 2025 13:46:00 +0000 (0:00:00.907) 0:02:52.610 ********** 2025-03-23 13:49:06.133765 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-03-23 13:49:06.133774 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:06.133782 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-03-23 13:49:06.133791 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:49:06.133799 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-03-23 13:49:06.133808 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:49:06.133816 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-03-23 13:49:06.133825 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:49:06.133833 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-03-23 13:49:06.133842 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:49:06.133851 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-03-23 13:49:06.133859 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:49:06.133868 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-03-23 13:49:06.133876 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:49:06.133885 | orchestrator | 2025-03-23 13:49:06.133893 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-03-23 13:49:06.133906 | orchestrator | Sunday 23 March 2025 13:46:08 +0000 (0:00:07.743) 0:03:00.353 ********** 2025-03-23 13:49:06.133920 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-03-23 13:49:06.133929 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:49:06.133937 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-03-23 13:49:06.133946 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:49:06.133955 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-03-23 13:49:06.133963 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:06.133972 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-03-23 13:49:06.133980 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:49:06.133989 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-03-23 13:49:06.133997 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:49:06.134006 | orchestrator | skipping: [testbed-node-5] => 
(item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-03-23 13:49:06.134014 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:49:06.134059 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-03-23 13:49:06.134068 | orchestrator | skipping: [testbed-manager] 2025-03-23 13:49:06.134076 | orchestrator | 2025-03-23 13:49:06.134085 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-03-23 13:49:06.134093 | orchestrator | Sunday 23 March 2025 13:46:13 +0000 (0:00:05.601) 0:03:05.955 ********** 2025-03-23 13:49:06.134102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:49:06.134115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:49:06.134124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:49:06.134137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:49:06.134146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.134154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.134173 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-23 13:49:06.134185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:49:06.134194 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 13:49:06.134207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.134215 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.134224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.134240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.134269 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.134282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.134290 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.134298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 
13:49:06.134330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134342 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.134350 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-23 13:49:06.134365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 13:49:06.134374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:49:06.134389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134409 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134418 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.134438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.134446 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.134461 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.134470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.134481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 13:49:06.134501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': 
'9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 13:49:06.134510 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:49:06.134518 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:49:06.134527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134565 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134590 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.134599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.134607 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 13:49:06.134634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': 
{'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:49:06.134648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.134656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 13:49:06.134665 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.134679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:49:06.134688 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.134704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.134712 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-23 13:49:06.134721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 13:49:06.134737 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:49:06.134767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.134776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.134792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 
'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.134801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.134838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134851 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-23 13:49:06.134860 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 13:49:06.134868 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.134884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.134915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134924 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 13:49:06.134932 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 13:49:06.134940 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 13:49:06.134948 | orchestrator | 2025-03-23 13:49:06.134956 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-03-23 13:49:06.134964 | orchestrator | Sunday 23 March 2025 13:46:22 +0000 (0:00:09.213) 0:03:15.168 ********** 2025-03-23 13:49:06.134972 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-03-23 13:49:06.134980 | orchestrator | 2025-03-23 13:49:06.134988 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-03-23 13:49:06.134996 | orchestrator | Sunday 23 March 2025 13:46:27 +0000 (0:00:04.313) 0:03:19.482 ********** 2025-03-23 13:49:06.135004 | orchestrator | 2025-03-23 13:49:06.135012 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-03-23 13:49:06.135020 | orchestrator | Sunday 23 March 2025 13:46:27 +0000 (0:00:00.084) 0:03:19.566 ********** 2025-03-23 13:49:06.135028 | orchestrator | 2025-03-23 13:49:06.135036 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-03-23 13:49:06.135048 | orchestrator | Sunday 23 March 2025 13:46:27 +0000 (0:00:00.284) 0:03:19.851 ********** 2025-03-23 13:49:06.135057 | orchestrator | 2025-03-23 13:49:06.135064 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-03-23 13:49:06.135072 | orchestrator | Sunday 23 March 2025 13:46:27 +0000 (0:00:00.070) 0:03:19.921 ********** 2025-03-23 
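The "Check prometheus containers" loop above iterates over per-service definitions that each carry an 'enabled' flag and a 'group' name; a host only handles the entries whose group it belongs to, which is why the same item shows up as "changed" on some nodes and "skipping" on others. A minimal Python sketch of that selection, assuming nothing beyond the fields visible in the logged items (the two definitions are trimmed copies from the log, host_groups is a hypothetical stand-in for the inventory, and this is not the actual kolla-ansible logic):

```python
# Minimal sketch, not the actual kolla-ansible implementation: decide which of
# the service definitions echoed in the loop output would be handled on a host.
# The two definitions are trimmed copies of items from the log; host_groups is
# a hypothetical stand-in for the host's group membership in the inventory.

services = {
    "prometheus-cadvisor": {
        "container_name": "prometheus_cadvisor",
        "group": "prometheus-cadvisor",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206",
    },
    "prometheus-openstack-exporter": {
        "container_name": "prometheus_openstack_exporter",
        "group": "prometheus-openstack-exporter",
        "enabled": False,
        "image": "registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206",
    },
}

host_groups = {"prometheus-cadvisor", "prometheus-node-exporter"}


def containers_for_host(service_map, groups):
    """Return container names that would be handled (not skipped) on this host."""
    return [
        svc["container_name"]
        for svc in service_map.values()
        if svc["enabled"] and svc["group"] in groups
    ]


if __name__ == "__main__":
    # Prints ['prometheus_cadvisor']; the disabled openstack exporter is skipped,
    # mirroring the skipping/changed pattern in the log.
    print(containers_for_host(services, host_groups))
```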
13:49:06.135080 | orchestrator | 2025-03-23 13:49:06.135088 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-03-23 13:49:06.135096 | orchestrator | Sunday 23 March 2025 13:46:27 +0000 (0:00:00.057) 0:03:19.979 ********** 2025-03-23 13:49:06.135104 | orchestrator | 2025-03-23 13:49:06.135112 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-03-23 13:49:06.135120 | orchestrator | Sunday 23 March 2025 13:46:27 +0000 (0:00:00.068) 0:03:20.048 ********** 2025-03-23 13:49:06.135128 | orchestrator | 2025-03-23 13:49:06.135136 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-03-23 13:49:06.135147 | orchestrator | Sunday 23 March 2025 13:46:28 +0000 (0:00:00.329) 0:03:20.377 ********** 2025-03-23 13:49:06.135155 | orchestrator | 2025-03-23 13:49:06.135163 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-03-23 13:49:06.135171 | orchestrator | Sunday 23 March 2025 13:46:28 +0000 (0:00:00.186) 0:03:20.563 ********** 2025-03-23 13:49:06.135178 | orchestrator | changed: [testbed-manager] 2025-03-23 13:49:06.135186 | orchestrator | 2025-03-23 13:49:06.135194 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-03-23 13:49:06.135202 | orchestrator | Sunday 23 March 2025 13:46:49 +0000 (0:00:21.319) 0:03:41.882 ********** 2025-03-23 13:49:06.135210 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:49:06.135218 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:49:06.135226 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:49:06.135234 | orchestrator | changed: [testbed-manager] 2025-03-23 13:49:06.135241 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:49:06.135249 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:49:06.135257 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:49:06.135265 | orchestrator | 2025-03-23 13:49:06.135273 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-03-23 13:49:06.135284 | orchestrator | Sunday 23 March 2025 13:47:18 +0000 (0:00:29.229) 0:04:11.111 ********** 2025-03-23 13:49:06.135292 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:49:06.135304 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:49:06.135311 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:49:06.135319 | orchestrator | 2025-03-23 13:49:06.135327 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-03-23 13:49:06.135335 | orchestrator | Sunday 23 March 2025 13:47:34 +0000 (0:00:15.992) 0:04:27.104 ********** 2025-03-23 13:49:06.135343 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:49:06.135351 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:49:06.135359 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:49:06.135367 | orchestrator | 2025-03-23 13:49:06.135375 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-03-23 13:49:06.135383 | orchestrator | Sunday 23 March 2025 13:47:46 +0000 (0:00:11.400) 0:04:38.505 ********** 2025-03-23 13:49:06.135391 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:49:06.135399 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:49:06.135407 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:49:06.135415 | orchestrator | changed: [testbed-node-1] 2025-03-23 
13:49:06.135422 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:49:06.135430 | orchestrator | changed: [testbed-manager] 2025-03-23 13:49:06.135438 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:49:06.135446 | orchestrator | 2025-03-23 13:49:06.135454 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-03-23 13:49:06.135462 | orchestrator | Sunday 23 March 2025 13:48:06 +0000 (0:00:19.839) 0:04:58.344 ********** 2025-03-23 13:49:06.135469 | orchestrator | changed: [testbed-manager] 2025-03-23 13:49:06.135481 | orchestrator | 2025-03-23 13:49:06.135490 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-03-23 13:49:06.135497 | orchestrator | Sunday 23 March 2025 13:48:23 +0000 (0:00:17.491) 0:05:15.836 ********** 2025-03-23 13:49:06.135505 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:49:06.135513 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:49:06.135521 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:49:06.135560 | orchestrator | 2025-03-23 13:49:06.135569 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-03-23 13:49:06.135578 | orchestrator | Sunday 23 March 2025 13:48:37 +0000 (0:00:13.814) 0:05:29.651 ********** 2025-03-23 13:49:06.135586 | orchestrator | changed: [testbed-manager] 2025-03-23 13:49:06.135594 | orchestrator | 2025-03-23 13:49:06.135602 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-03-23 13:49:06.135610 | orchestrator | Sunday 23 March 2025 13:48:47 +0000 (0:00:10.333) 0:05:39.984 ********** 2025-03-23 13:49:06.135618 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:49:06.135625 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:49:06.135633 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:49:06.135641 | orchestrator | 2025-03-23 13:49:06.135649 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:49:06.135657 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-03-23 13:49:06.135665 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-03-23 13:49:06.135672 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-03-23 13:49:06.135679 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-03-23 13:49:06.135686 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-03-23 13:49:06.135693 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-03-23 13:49:06.135700 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-03-23 13:49:06.135707 | orchestrator | 2025-03-23 13:49:06.135714 | orchestrator | 2025-03-23 13:49:06.135721 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:49:06.135728 | orchestrator | Sunday 23 March 2025 13:49:04 +0000 (0:00:17.121) 0:05:57.105 ********** 2025-03-23 13:49:06.135735 | orchestrator | =============================================================================== 2025-03-23 13:49:06.135742 | orchestrator | prometheus : Copying over custom 
prometheus alert rules files ---------- 62.02s 2025-03-23 13:49:06.135749 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 29.23s 2025-03-23 13:49:06.135756 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 21.79s 2025-03-23 13:49:06.135763 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 21.32s 2025-03-23 13:49:06.135770 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 19.84s 2025-03-23 13:49:06.135777 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 17.49s 2025-03-23 13:49:06.135784 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 17.12s 2025-03-23 13:49:06.135791 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 15.99s 2025-03-23 13:49:06.135801 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 13.81s 2025-03-23 13:49:06.135808 | orchestrator | prometheus : Copying over config.json files ---------------------------- 12.49s 2025-03-23 13:49:06.135822 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.40s 2025-03-23 13:49:09.167119 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.33s 2025-03-23 13:49:09.167241 | orchestrator | prometheus : Check prometheus containers -------------------------------- 9.21s 2025-03-23 13:49:09.167260 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 8.02s 2025-03-23 13:49:09.167275 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 7.90s 2025-03-23 13:49:09.167289 | orchestrator | prometheus : Copying over prometheus msteams config file ---------------- 7.74s 2025-03-23 13:49:09.167304 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 7.35s 2025-03-23 13:49:09.167318 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 6.16s 2025-03-23 13:49:09.167332 | orchestrator | prometheus : Copying over prometheus msteams template file -------------- 5.60s 2025-03-23 13:49:09.167346 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 5.17s 2025-03-23 13:49:09.167361 | orchestrator | 2025-03-23 13:49:06 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:49:09.167376 | orchestrator | 2025-03-23 13:49:06 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:49:09.167390 | orchestrator | 2025-03-23 13:49:06 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:49:09.167404 | orchestrator | 2025-03-23 13:49:06 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:49:09.167436 | orchestrator | 2025-03-23 13:49:09 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:49:09.171865 | orchestrator | 2025-03-23 13:49:09 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:49:09.171903 | orchestrator | 2025-03-23 13:49:09 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:49:09.172298 | orchestrator | 2025-03-23 13:49:09 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:49:09.175662 | orchestrator | 2025-03-23 13:49:09 | INFO  | Task 
3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:49:12.229084 | orchestrator | 2025-03-23 13:49:09 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:49:12.229208 | orchestrator | 2025-03-23 13:49:12 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:49:12.231441 | orchestrator | 2025-03-23 13:49:12 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:49:12.232454 | orchestrator | 2025-03-23 13:49:12 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:49:12.234620 | orchestrator | 2025-03-23 13:49:12 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:49:12.236112 | orchestrator | 2025-03-23 13:49:12 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:49:12.236250 | orchestrator | 2025-03-23 13:49:12 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:49:15.292414 | orchestrator | 2025-03-23 13:49:15 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:49:15.293203 | orchestrator | 2025-03-23 13:49:15 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:49:15.295089 | orchestrator | 2025-03-23 13:49:15 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:49:15.297190 | orchestrator | 2025-03-23 13:49:15 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:49:15.301948 | orchestrator | 2025-03-23 13:49:15 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:49:18.340440 | orchestrator | 2025-03-23 13:49:15 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:49:18.340589 | orchestrator | 2025-03-23 13:49:18 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:49:18.341095 | orchestrator | 2025-03-23 13:49:18 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:49:18.342442 | orchestrator | 2025-03-23 13:49:18 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:49:18.343986 | orchestrator | 2025-03-23 13:49:18 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:49:18.346087 | orchestrator | 2025-03-23 13:49:18 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:49:21.399418 | orchestrator | 2025-03-23 13:49:18 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:49:21.399608 | orchestrator | 2025-03-23 13:49:21 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:49:21.402092 | orchestrator | 2025-03-23 13:49:21 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:49:21.405301 | orchestrator | 2025-03-23 13:49:21 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:49:21.407693 | orchestrator | 2025-03-23 13:49:21 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:49:21.409280 | orchestrator | 2025-03-23 13:49:21 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state STARTED 2025-03-23 13:49:21.409614 | orchestrator | 2025-03-23 13:49:21 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:49:24.460202 | orchestrator | 2025-03-23 13:49:24 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:49:24.460564 | orchestrator | 2025-03-23 13:49:24 | INFO  | Task 
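The interleaved "Task <uuid> is in state STARTED" and "Wait 1 second(s) until the next check" lines above are a plain poll-and-wait loop over the OSISM task queue. A minimal sketch of that pattern, with a fake in-memory backend standing in for the real task store so the example runs on its own (the UUIDs and the STARTED state are taken from the log; the backend and the SUCCESS state are assumptions):

```python
# Minimal sketch of the poll-and-wait pattern visible in the log. The fake
# backend below only exists so the example runs on its own; the real tooling
# queries its task store instead. Task UUIDs and the STARTED state are taken
# from the log; the SUCCESS state and the backend itself are assumptions.

import time

_FAKE_BACKEND = {
    "f8079d8c-9512-4ecd-b2ac-9d3341f82384": iter(["STARTED", "STARTED", "SUCCESS"]),
    "c85b7a2c-673d-4e80-a662-53688b4de508": iter(["STARTED", "SUCCESS"]),
}


def get_task_state(task_id):
    """Placeholder lookup; exhausted fake tasks simply report SUCCESS."""
    return next(_FAKE_BACKEND[task_id], "SUCCESS")


def wait_for_tasks(task_ids, interval=1.0):
    """Poll every task once per round until none of them is STARTED anymore."""
    pending = set(task_ids)
    while pending:
        still_running = set()
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "STARTED":
                still_running.add(task_id)
        pending = still_running
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)


if __name__ == "__main__":
    wait_for_tasks(_FAKE_BACKEND.keys(), interval=1)
```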
eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:49:24.463748 | orchestrator | 2025-03-23 13:49:24 | INFO  | Task dfd0b26d-1b95-426b-b509-8d1fb0875907 is in state STARTED 2025-03-23 13:49:24.465607 | orchestrator | 2025-03-23 13:49:24 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:49:24.468136 | orchestrator | 2025-03-23 13:49:24 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:49:24.468752 | orchestrator | 2025-03-23 13:49:24 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:49:24.474865 | orchestrator | 2025-03-23 13:49:24.475068 | orchestrator | 2025-03-23 13:49:24.475396 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 13:49:24.475415 | orchestrator | 2025-03-23 13:49:24.475431 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 13:49:24.475447 | orchestrator | Sunday 23 March 2025 13:45:30 +0000 (0:00:00.359) 0:00:00.359 ********** 2025-03-23 13:49:24.475462 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:49:24.475479 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:49:24.475495 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:49:24.475509 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:49:24.475550 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:49:24.475565 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:49:24.475580 | orchestrator | 2025-03-23 13:49:24.475594 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-23 13:49:24.475608 | orchestrator | Sunday 23 March 2025 13:45:31 +0000 (0:00:00.611) 0:00:00.971 ********** 2025-03-23 13:49:24.475622 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-03-23 13:49:24.475637 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-03-23 13:49:24.475984 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-03-23 13:49:24.476009 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-03-23 13:49:24.476025 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-03-23 13:49:24.476040 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-03-23 13:49:24.476055 | orchestrator | 2025-03-23 13:49:24.476071 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-03-23 13:49:24.476157 | orchestrator | 2025-03-23 13:49:24.476176 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-03-23 13:49:24.476190 | orchestrator | Sunday 23 March 2025 13:45:31 +0000 (0:00:00.761) 0:00:01.733 ********** 2025-03-23 13:49:24.476237 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:49:24.476300 | orchestrator | 2025-03-23 13:49:24.476316 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-03-23 13:49:24.476331 | orchestrator | Sunday 23 March 2025 13:45:33 +0000 (0:00:01.221) 0:00:02.955 ********** 2025-03-23 13:49:24.476347 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-03-23 13:49:24.476361 | orchestrator | 2025-03-23 13:49:24.476376 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-03-23 
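The service-ks-register tasks in this play (creating the cinderv3 service and, in the entries that follow, its endpoints, the service project, the cinder user, and the role grants) report "changed" when something has to be created and "ok" when it already exists. A minimal sketch of that idempotent ensure-it-exists pattern, using an in-memory dictionary as a stand-in for the Keystone API rather than the role's real implementation:

```python
# Minimal sketch, not the service-ks-register role itself: create an entity
# only if it is missing and report an Ansible-style "changed" or "ok" result.
# The in-memory keystone dict is a stand-in for the Identity API; the
# pre-existing 'service' project and 'admin' role mirror the "ok" results
# in this play.

keystone = {
    "services": {},              # name -> type; cinderv3 does not exist yet
    "projects": {"service"},     # already present, so ensuring it reports "ok"
    "roles": {"admin"},          # already present, so ensuring it reports "ok"
    "users": set(),
}


def ensure(collection, key, create):
    """Create the entity if missing and return the resulting task status."""
    store = keystone[collection]
    if key in store:
        return "ok"
    create(store, key)
    return "changed"


if __name__ == "__main__":
    steps = [
        ("cinderv3 (volumev3)", ensure("services", "cinderv3",
                                       lambda s, k: s.update({k: "volumev3"}))),
        ("project service", ensure("projects", "service", lambda s, k: s.add(k))),
        ("user cinder -> service", ensure("users", "cinder", lambda s, k: s.add(k))),
        ("role admin", ensure("roles", "admin", lambda s, k: s.add(k))),
    ]
    for name, status in steps:
        print(f"{status}: {name}")
```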
13:49:24.476423 | orchestrator | Sunday 23 March 2025 13:45:36 +0000 (0:00:03.699) 0:00:06.654 ********** 2025-03-23 13:49:24.476750 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-03-23 13:49:24.476771 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-03-23 13:49:24.476785 | orchestrator | 2025-03-23 13:49:24.476799 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-03-23 13:49:24.476813 | orchestrator | Sunday 23 March 2025 13:45:44 +0000 (0:00:07.674) 0:00:14.328 ********** 2025-03-23 13:49:24.476827 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-03-23 13:49:24.476842 | orchestrator | 2025-03-23 13:49:24.476856 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-03-23 13:49:24.476870 | orchestrator | Sunday 23 March 2025 13:45:48 +0000 (0:00:04.098) 0:00:18.427 ********** 2025-03-23 13:49:24.476883 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-03-23 13:49:24.476897 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-03-23 13:49:24.476911 | orchestrator | 2025-03-23 13:49:24.476925 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-03-23 13:49:24.476939 | orchestrator | Sunday 23 March 2025 13:45:52 +0000 (0:00:04.328) 0:00:22.756 ********** 2025-03-23 13:49:24.476953 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-03-23 13:49:24.476967 | orchestrator | 2025-03-23 13:49:24.476981 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-03-23 13:49:24.476995 | orchestrator | Sunday 23 March 2025 13:45:56 +0000 (0:00:03.904) 0:00:26.660 ********** 2025-03-23 13:49:24.477009 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-03-23 13:49:24.477023 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-03-23 13:49:24.477037 | orchestrator | 2025-03-23 13:49:24.477066 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-03-23 13:49:24.477080 | orchestrator | Sunday 23 March 2025 13:46:06 +0000 (0:00:09.841) 0:00:36.502 ********** 2025-03-23 13:49:24.477217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.477259 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.477275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.477290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.477305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-23 13:49:24.477320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': 
'30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-23 13:49:24.477377 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.477410 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.477426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-23 13:49:24.477442 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.477457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.477512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.477591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.477623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.477639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.477654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.477719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.477735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.477758 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.477772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.477785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.477807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.477858 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.477875 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': 
True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.477889 | orchestrator | 2025-03-23 13:49:24.477904 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-03-23 13:49:24.477918 | orchestrator | Sunday 23 March 2025 13:46:10 +0000 (0:00:03.333) 0:00:39.836 ********** 2025-03-23 13:49:24.477932 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:24.477946 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:49:24.477960 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:49:24.477974 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:49:24.477988 | orchestrator | 2025-03-23 13:49:24.478002 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-03-23 13:49:24.478070 | orchestrator | Sunday 23 March 2025 13:46:13 +0000 (0:00:03.116) 0:00:42.953 ********** 2025-03-23 13:49:24.478088 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-03-23 13:49:24.478102 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-03-23 13:49:24.478116 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-03-23 13:49:24.478130 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-03-23 13:49:24.478144 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-03-23 13:49:24.478156 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-03-23 13:49:24.478169 | orchestrator | 2025-03-23 13:49:24.478181 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-03-23 13:49:24.478193 | orchestrator | Sunday 23 March 2025 13:46:18 +0000 (0:00:05.447) 0:00:48.400 ********** 2025-03-23 13:49:24.478207 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-03-23 13:49:24.478230 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-03-23 13:49:24.478279 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-03-23 13:49:24.478294 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-03-23 13:49:24.478308 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-03-23 13:49:24.478345 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-03-23 13:49:24.478359 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-03-23 13:49:24.478403 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-03-23 13:49:24.478418 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-03-23 13:49:24.478432 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-03-23 13:49:24.478454 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-03-23 13:49:24.478493 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-03-23 13:49:24.478508 | orchestrator | 2025-03-23 13:49:24.478542 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-03-23 13:49:24.478556 | orchestrator | Sunday 23 March 2025 13:46:25 +0000 (0:00:06.448) 0:00:54.849 ********** 2025-03-23 13:49:24.478569 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-03-23 13:49:24.478582 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-03-23 13:49:24.478594 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-03-23 13:49:24.478607 | orchestrator | 2025-03-23 13:49:24.478619 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-03-23 13:49:24.478632 | orchestrator | Sunday 23 March 2025 13:46:27 +0000 (0:00:02.759) 0:00:57.608 ********** 2025-03-23 13:49:24.478644 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-03-23 13:49:24.478657 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-03-23 13:49:24.478670 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-03-23 13:49:24.478682 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-03-23 13:49:24.478695 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-03-23 13:49:24.478950 | orchestrator | changed: [testbed-node-5] => 
(item=ceph.client.cinder-backup.keyring) 2025-03-23 13:49:24.478965 | orchestrator | 2025-03-23 13:49:24.478978 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-03-23 13:49:24.478990 | orchestrator | Sunday 23 March 2025 13:46:32 +0000 (0:00:04.167) 0:01:01.776 ********** 2025-03-23 13:49:24.479003 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-03-23 13:49:24.479015 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-03-23 13:49:24.479028 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-03-23 13:49:24.479040 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-03-23 13:49:24.479060 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-03-23 13:49:24.479073 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-03-23 13:49:24.479085 | orchestrator | 2025-03-23 13:49:24.479098 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-03-23 13:49:24.479110 | orchestrator | Sunday 23 March 2025 13:46:33 +0000 (0:00:01.343) 0:01:03.120 ********** 2025-03-23 13:49:24.479122 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:24.479135 | orchestrator | 2025-03-23 13:49:24.479148 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-03-23 13:49:24.479160 | orchestrator | Sunday 23 March 2025 13:46:33 +0000 (0:00:00.304) 0:01:03.424 ********** 2025-03-23 13:49:24.479173 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:24.479185 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:49:24.479197 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:49:24.479210 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:49:24.479223 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:49:24.479235 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:49:24.479247 | orchestrator | 2025-03-23 13:49:24.479260 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-03-23 13:49:24.479272 | orchestrator | Sunday 23 March 2025 13:46:35 +0000 (0:00:01.661) 0:01:05.085 ********** 2025-03-23 13:49:24.479286 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:49:24.479299 | orchestrator | 2025-03-23 13:49:24.479312 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-03-23 13:49:24.479325 | orchestrator | Sunday 23 March 2025 13:46:37 +0000 (0:00:02.129) 0:01:07.215 ********** 2025-03-23 13:49:24.479338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-23 13:49:24.479387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-23 13:49:24.479403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-23 13:49:24.479423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.479436 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.479450 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.479489 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.479504 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.479542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.479557 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.479570 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.479583 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.479596 | orchestrator | 2025-03-23 13:49:24.479609 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-03-23 13:49:24.479622 | orchestrator | Sunday 23 March 2025 13:46:42 +0000 (0:00:04.719) 0:01:11.934 ********** 2025-03-23 13:49:24.479664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.479685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 
'timeout': '30'}}})  2025-03-23 13:49:24.479699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.479712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.479725 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.479767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.479788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.479802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.479814 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:49:24.479827 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:24.479840 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:49:24.479853 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:49:24.479866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.479879 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.479893 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:49:24.479932 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.479953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.479966 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:49:24.479979 | orchestrator | 2025-03-23 13:49:24.479991 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-03-23 13:49:24.480004 | orchestrator | Sunday 23 March 2025 13:46:45 +0000 (0:00:03.738) 0:01:15.673 ********** 2025-03-23 13:49:24.480017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.480030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.480043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.480083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.480104 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:24.480117 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:49:24.480130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.480143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.480156 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:49:24.480169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.480182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.480229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.480244 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:49:24.480257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.480270 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:49:24.480283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.480296 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.480308 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:49:24.480321 | orchestrator | 2025-03-23 13:49:24.480333 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-03-23 13:49:24.480346 | orchestrator | Sunday 23 March 2025 13:46:51 +0000 (0:00:05.533) 0:01:21.206 ********** 2025-03-23 13:49:24.480359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.480405 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.480420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-23 13:49:24.480433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-23 13:49:24.480446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.480459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.480505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.480520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.480587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.480601 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.480614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.480634 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.480679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.480695 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.480709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.480722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.480741 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.480783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-23 13:49:24.480798 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.480811 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.480824 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.480843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.480861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.480875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.480888 | orchestrator | 2025-03-23 13:49:24.480900 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-03-23 13:49:24.480919 | orchestrator | Sunday 23 March 2025 13:46:58 +0000 (0:00:07.233) 0:01:28.439 ********** 2025-03-23 13:49:24.480932 | orchestrator | skipping: [testbed-node-3] => 
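The health checks above follow one pattern per service type: cinder-api is probed over HTTP against the node's own internal address (192.168.16.10 through .15), while scheduler, volume and backup are checked on port 5672 (conventionally the AMQP port). The small helper below only restates that pattern as read from the log; it is not part of kolla-ansible.

# Illustrative only: rebuilds the healthcheck "test" entries seen in the items above.
def cinder_healthcheck_test(service, api_address):
    if service == "cinder-api":
        # HTTP probe of the node-local API endpoint, e.g. http://192.168.16.12:8776
        return ["CMD-SHELL", "healthcheck_curl http://%s:8776" % api_address]
    # scheduler / volume / backup: check the service's connection on port 5672
    return ["CMD-SHELL", "healthcheck_port %s 5672" % service]

assert cinder_healthcheck_test("cinder-api", "192.168.16.10") == \
    ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8776"]
assert cinder_healthcheck_test("cinder-volume", "192.168.16.13") == \
    ["CMD-SHELL", "healthcheck_port cinder-volume 5672"]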
(item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-03-23 13:49:24.480944 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:49:24.480957 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-03-23 13:49:24.480969 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:49:24.480981 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-03-23 13:49:24.480994 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:49:24.481014 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-03-23 13:49:24.481026 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-03-23 13:49:24.481039 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-03-23 13:49:24.481051 | orchestrator | 2025-03-23 13:49:24.481063 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-03-23 13:49:24.481075 | orchestrator | Sunday 23 March 2025 13:47:04 +0000 (0:00:05.733) 0:01:34.173 ********** 2025-03-23 13:49:24.481087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.481105 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481123 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.481134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.481155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-23 13:49:24.481182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-23 13:49:24.481198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-23 13:49:24.481209 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.481220 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.481238 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.481249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.481264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.481286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481329 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.481340 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.481351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.481366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481388 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.481399 | orchestrator | 2025-03-23 13:49:24.481412 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-03-23 13:49:24.481423 | orchestrator | Sunday 23 March 2025 13:47:25 +0000 (0:00:20.847) 0:01:55.020 ********** 2025-03-23 13:49:24.481433 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:24.481444 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:49:24.481454 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:49:24.481464 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:49:24.481474 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:49:24.481484 | 
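The 'hostnqn' step only reports changes on the cinder-volume hosts (testbed-node-3, -4 and -5). The file typically carries the host's NVMe Qualified Name, as used for NVMe-oF volume attachments; the uuid-based format below is the conventional one and is shown only as an illustration, since the template the role renders is not visible in this log.

# Illustration of the conventional uuid-based NVMe host NQN format; the exact
# content written by the role is not shown in the log above.
import uuid

def example_hostnqn():
    return "nqn.2014-08.org.nvmexpress:uuid:%s" % uuid.uuid4()

print(example_hostnqn())  # e.g. nqn.2014-08.org.nvmexpress:uuid:1b4e28ba-...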
orchestrator | changed: [testbed-node-5] 2025-03-23 13:49:24.481494 | orchestrator | 2025-03-23 13:49:24.481504 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-03-23 13:49:24.481514 | orchestrator | Sunday 23 March 2025 13:47:32 +0000 (0:00:06.941) 0:02:01.962 ********** 2025-03-23 13:49:24.481541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.481557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.481568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481642 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:49:24.481652 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:24.481663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.481678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481715 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:49:24.481726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.481736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481777 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:49:24.481788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.481798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481830 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:49:24.481845 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 
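Each cinder-api item carries the same haproxy fragment: an internal cinder_api listener and an external cinder_api_external listener, both on port 8776 in HTTP mode, the external one bound to api.testbed.osism.xyz and with TLS towards the backend disabled. Copied out of the items above into a standalone sketch; only the variable name is assumed.

# The haproxy fragment repeated in every cinder-api item above; values are from the
# log, only the name "cinder_api_haproxy" is assumed.
cinder_api_haproxy = {
    "cinder_api": {                       # internal listener
        "enabled": "yes", "mode": "http", "external": False,
        "port": "8776", "listen_port": "8776", "tls_backend": "no",
    },
    "cinder_api_external": {              # public listener
        "enabled": "yes", "mode": "http", "external": True,
        "external_fqdn": "api.testbed.osism.xyz",
        "port": "8776", "listen_port": "8776", "tls_backend": "no",
    },
}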
13:49:24.481861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.481893 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:49:24.481903 | orchestrator | 2025-03-23 13:49:24.481913 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-03-23 13:49:24.481924 | orchestrator | Sunday 23 March 2025 13:47:33 +0000 (0:00:01.629) 0:02:03.591 ********** 2025-03-23 13:49:24.481934 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:24.481944 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:49:24.481954 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:49:24.481964 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:49:24.481974 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:49:24.481984 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:49:24.481993 | orchestrator | 2025-03-23 13:49:24.482004 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-03-23 13:49:24.482038 | orchestrator | Sunday 23 March 2025 13:47:35 +0000 (0:00:01.367) 0:02:04.959 ********** 2025-03-23 13:49:24.482056 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.482073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.482084 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.482094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.482105 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 
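Two details of the cinder-volume items stand out: the empty strings inside the volumes lists, which look like placeholders left by optional mounts that are disabled in this configuration, and the bind mount of /opt/cinder-driver-dm-clone into the container's site-packages. Presumably the empty entries are dropped before the container is created; the sketch below only illustrates that idea and is not the deployment's actual code.

# Illustrative only: dropping the empty placeholder entries seen in the volumes lists.
# The list is an abridged copy of a cinder-volume item from the log above.
volumes = [
    "/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro",
    "cinder:/var/lib/cinder", "iscsi_info:/etc/iscsi", "",
    "kolla_logs:/var/log/kolla/", "",
    "/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone",
]
effective_volumes = [v for v in volumes if v]   # keep only non-empty mount specs
print(effective_volumes)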
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 13:49:24.482116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.482135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-23 13:49:24.482147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-23 13:49:24.482158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-23 13:49:24.482168 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.482183 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.482199 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.482210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 
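Throughout the play the changed/skipping pattern is consistent: testbed-node-0 through -2 handle cinder-api and cinder-scheduler, while testbed-node-3 through -5 handle cinder-volume and cinder-backup. The mapping below is a summary read off the results in this log, not configuration.

# Placement as read off the changed/skipping results in this log (summary only).
cinder_placement = {
    "cinder-api":       ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
    "cinder-scheduler": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
    "cinder-volume":    ["testbed-node-3", "testbed-node-4", "testbed-node-5"],
    "cinder-backup":    ["testbed-node-3", "testbed-node-4", "testbed-node-5"],
}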
13:49:24.482221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.482231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.482242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.482262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.482273 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.482284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.482294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.482305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.482326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 13:49:24.482337 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.482348 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-23 13:49:24.482358 | orchestrator | 2025-03-23 13:49:24.482368 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-03-23 13:49:24.482378 | orchestrator | Sunday 23 March 2025 13:47:39 +0000 (0:00:04.546) 0:02:09.506 ********** 2025-03-23 13:49:24.482389 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:24.482399 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:49:24.482409 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:49:24.482419 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:49:24.482429 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:49:24.482439 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:49:24.482449 | orchestrator | 2025-03-23 13:49:24.482459 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-03-23 13:49:24.482469 | orchestrator | Sunday 23 March 2025 13:47:41 +0000 (0:00:01.893) 0:02:11.400 ********** 2025-03-23 13:49:24.482479 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:49:24.482489 | orchestrator | 2025-03-23 13:49:24.482499 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-03-23 13:49:24.482509 | orchestrator | Sunday 23 March 2025 13:47:44 +0000 (0:00:03.066) 0:02:14.466 ********** 2025-03-23 13:49:24.482519 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:49:24.482545 | orchestrator | 2025-03-23 13:49:24.482555 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-03-23 13:49:24.482565 | orchestrator | Sunday 23 March 2025 13:47:47 +0000 (0:00:02.717) 0:02:17.184 ********** 2025-03-23 13:49:24.482575 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:49:24.482585 | orchestrator | 2025-03-23 13:49:24.482595 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-03-23 13:49:24.482610 | orchestrator | Sunday 23 March 2025 13:48:08 +0000 (0:00:20.911) 0:02:38.095 ********** 2025-03-23 13:49:24.482621 | orchestrator | 2025-03-23 13:49:24.482631 | orchestrator | 
TASK [cinder : Flush handlers] ************************************************* 2025-03-23 13:49:24.482640 | orchestrator | Sunday 23 March 2025 13:48:08 +0000 (0:00:00.271) 0:02:38.367 ********** 2025-03-23 13:49:24.482650 | orchestrator | 2025-03-23 13:49:24.482660 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-03-23 13:49:24.482670 | orchestrator | Sunday 23 March 2025 13:48:08 +0000 (0:00:00.373) 0:02:38.741 ********** 2025-03-23 13:49:24.482680 | orchestrator | 2025-03-23 13:49:24.482690 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-03-23 13:49:24.482700 | orchestrator | Sunday 23 March 2025 13:48:09 +0000 (0:00:00.105) 0:02:38.846 ********** 2025-03-23 13:49:24.482710 | orchestrator | 2025-03-23 13:49:24.482720 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-03-23 13:49:24.482730 | orchestrator | Sunday 23 March 2025 13:48:09 +0000 (0:00:00.171) 0:02:39.018 ********** 2025-03-23 13:49:24.482740 | orchestrator | 2025-03-23 13:49:24.482750 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-03-23 13:49:24.482760 | orchestrator | Sunday 23 March 2025 13:48:09 +0000 (0:00:00.159) 0:02:39.178 ********** 2025-03-23 13:49:24.482770 | orchestrator | 2025-03-23 13:49:24.482780 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-03-23 13:49:24.482794 | orchestrator | Sunday 23 March 2025 13:48:09 +0000 (0:00:00.433) 0:02:39.612 ********** 2025-03-23 13:49:24.482804 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:49:24.482814 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:49:24.482824 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:49:24.482834 | orchestrator | 2025-03-23 13:49:24.482844 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-03-23 13:49:24.482854 | orchestrator | Sunday 23 March 2025 13:48:30 +0000 (0:00:20.597) 0:03:00.210 ********** 2025-03-23 13:49:24.482864 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:49:24.482874 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:49:24.482884 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:49:24.482894 | orchestrator | 2025-03-23 13:49:24.482904 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-03-23 13:49:24.482918 | orchestrator | Sunday 23 March 2025 13:48:43 +0000 (0:00:12.909) 0:03:13.119 ********** 2025-03-23 13:49:27.532493 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:49:27.532645 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:49:27.532662 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:49:27.532676 | orchestrator | 2025-03-23 13:49:27.532691 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-03-23 13:49:27.532706 | orchestrator | Sunday 23 March 2025 13:49:08 +0000 (0:00:25.151) 0:03:38.270 ********** 2025-03-23 13:49:27.532720 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:49:27.532734 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:49:27.532748 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:49:27.532762 | orchestrator | 2025-03-23 13:49:27.532776 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-03-23 13:49:27.532791 | orchestrator | Sunday 23 
March 2025 13:49:20 +0000 (0:00:12.453) 0:03:50.724 ********** 2025-03-23 13:49:27.532805 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:49:27.532819 | orchestrator | 2025-03-23 13:49:27.532832 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:49:27.532847 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-03-23 13:49:27.532863 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-03-23 13:49:27.532877 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-03-23 13:49:27.532916 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-03-23 13:49:27.532931 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-03-23 13:49:27.532945 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-03-23 13:49:27.532958 | orchestrator | 2025-03-23 13:49:27.532972 | orchestrator | 2025-03-23 13:49:27.532986 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:49:27.533000 | orchestrator | Sunday 23 March 2025 13:49:21 +0000 (0:00:00.695) 0:03:51.420 ********** 2025-03-23 13:49:27.533014 | orchestrator | =============================================================================== 2025-03-23 13:49:27.533028 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 25.15s 2025-03-23 13:49:27.533044 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.91s 2025-03-23 13:49:27.533059 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 20.85s 2025-03-23 13:49:27.533076 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 20.60s 2025-03-23 13:49:27.533092 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 12.91s 2025-03-23 13:49:27.533107 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 12.45s 2025-03-23 13:49:27.533123 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 9.84s 2025-03-23 13:49:27.533138 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.67s 2025-03-23 13:49:27.533153 | orchestrator | cinder : Copying over config.json files for services -------------------- 7.23s 2025-03-23 13:49:27.533168 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 6.94s 2025-03-23 13:49:27.533184 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.45s 2025-03-23 13:49:27.533199 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 5.73s 2025-03-23 13:49:27.533214 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 5.53s 2025-03-23 13:49:27.533230 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 5.45s 2025-03-23 13:49:27.533245 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.72s 2025-03-23 13:49:27.533260 | orchestrator | cinder : Check cinder containers ---------------------------------------- 4.55s 2025-03-23 
13:49:27.533276 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.33s 2025-03-23 13:49:27.533291 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 4.17s 2025-03-23 13:49:27.533440 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 4.10s 2025-03-23 13:49:27.533465 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.90s 2025-03-23 13:49:27.533480 | orchestrator | 2025-03-23 13:49:24 | INFO  | Task 3696e34d-06a1-47dc-bb2a-c3a8b0899827 is in state SUCCESS 2025-03-23 13:49:27.533494 | orchestrator | 2025-03-23 13:49:24 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:49:27.533549 | orchestrator | 2025-03-23 13:49:27 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:49:27.535055 | orchestrator | 2025-03-23 13:49:27 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:49:27.535091 | orchestrator | 2025-03-23 13:49:27 | INFO  | Task dfd0b26d-1b95-426b-b509-8d1fb0875907 is in state STARTED 2025-03-23 13:49:27.535947 | orchestrator | 2025-03-23 13:49:27 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:49:27.538321 | orchestrator | 2025-03-23 13:49:27 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:49:27.539741 | orchestrator | 2025-03-23 13:49:27 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:49:27.539828 | orchestrator | 2025-03-23 13:49:27 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:49:30.595295 | orchestrator | 2025-03-23 13:49:30 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:49:30.596761 | orchestrator | 2025-03-23 13:49:30 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:49:30.597035 | orchestrator | 2025-03-23 13:49:30 | INFO  | Task dfd0b26d-1b95-426b-b509-8d1fb0875907 is in state STARTED 2025-03-23 13:49:30.599038 | orchestrator | 2025-03-23 13:49:30 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:49:30.599700 | orchestrator | 2025-03-23 13:49:30 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:49:30.600640 | orchestrator | 2025-03-23 13:49:30 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:49:33.636566 | orchestrator | 2025-03-23 13:49:30 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:49:33.636689 | orchestrator | 2025-03-23 13:49:33 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:49:33.637753 | orchestrator | 2025-03-23 13:49:33 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:49:33.638909 | orchestrator | 2025-03-23 13:49:33 | INFO  | Task dfd0b26d-1b95-426b-b509-8d1fb0875907 is in state STARTED 2025-03-23 13:49:33.642109 | orchestrator | 2025-03-23 13:49:33 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:49:33.642938 | orchestrator | 2025-03-23 13:49:33 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:49:33.643941 | orchestrator | 2025-03-23 13:49:33 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:49:33.644072 | orchestrator | 2025-03-23 13:49:33 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:49:36.697692 | 
orchestrator | 2025-03-23 13:49:36 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:49:39.737799 | orchestrator | 2025-03-23 13:49:36 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:49:39.737913 | orchestrator | 2025-03-23 13:49:36 | INFO  | Task dfd0b26d-1b95-426b-b509-8d1fb0875907 is in state STARTED 2025-03-23 13:49:39.737932 | orchestrator | 2025-03-23 13:49:36 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:49:39.737946 | orchestrator | 2025-03-23 13:49:36 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:49:39.737961 | orchestrator | 2025-03-23 13:49:36 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:49:39.737975 | orchestrator | 2025-03-23 13:49:36 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:49:39.738004 | orchestrator | 2025-03-23 13:49:39 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:49:39.738465 | orchestrator | 2025-03-23 13:49:39 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:49:39.738500 | orchestrator | 2025-03-23 13:49:39 | INFO  | Task dfd0b26d-1b95-426b-b509-8d1fb0875907 is in state SUCCESS 2025-03-23 13:49:39.739050 | orchestrator | 2025-03-23 13:49:39 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:49:39.739873 | orchestrator | 2025-03-23 13:49:39 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:49:39.740669 | orchestrator | 2025-03-23 13:49:39 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:49:42.791821 | orchestrator | 2025-03-23 13:49:39 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:49:42.791948 | orchestrator | 2025-03-23 13:49:42 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:49:42.793687 | orchestrator | 2025-03-23 13:49:42 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:49:42.794987 | orchestrator | 2025-03-23 13:49:42 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:49:42.799099 | orchestrator | 2025-03-23 13:49:42 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:49:42.801041 | orchestrator | 2025-03-23 13:49:42 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:49:42.801337 | orchestrator | 2025-03-23 13:49:42 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:49:45.843377 | orchestrator | 2025-03-23 13:49:45 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:49:45.844056 | orchestrator | 2025-03-23 13:49:45 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:49:45.845140 | orchestrator | 2025-03-23 13:49:45 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:49:45.849808 | orchestrator | 2025-03-23 13:49:45 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:49:48.923395 | orchestrator | 2025-03-23 13:49:45 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:49:48.923494 | orchestrator | 2025-03-23 13:49:45 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:49:48.923554 | orchestrator | 2025-03-23 13:49:48 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 
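Editor's note: the repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from the deploy wrapper polling a set of task IDs until each one reports SUCCESS. The sketch below is only an illustration of that kind of wait loop, not the actual OSISM implementation; get_task_state is a hypothetical callable standing in for however the real tooling queries task status, and failure handling is omitted for brevity.

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Poll task IDs until every one reports SUCCESS.

        get_task_state: hypothetical callable mapping a task ID to a
        state string such as "STARTED" or "SUCCESS" (an assumption,
        not the real API).
        """
        pending = set(task_ids)
        while pending:
            # sorted() copies the set, so discarding inside the loop is safe
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

This matches the observable behaviour in the log: several task IDs are reported on each pass, completed tasks drop out of subsequent passes, and the loop sleeps roughly one second between checks.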
2025-03-23 13:49:48.926256 | orchestrator | 2025-03-23 13:49:48 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:49:48.929190 | orchestrator | 2025-03-23 13:49:48 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:49:48.931661 | orchestrator | 2025-03-23 13:49:48 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:49:48.933565 | orchestrator | 2025-03-23 13:49:48 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:49:48.934099 | orchestrator | 2025-03-23 13:49:48 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:49:51.988097 | orchestrator | 2025-03-23 13:49:51 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:49:51.990376 | orchestrator | 2025-03-23 13:49:51 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:49:51.991313 | orchestrator | 2025-03-23 13:49:51 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:49:51.992473 | orchestrator | 2025-03-23 13:49:51 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:49:51.993765 | orchestrator | 2025-03-23 13:49:51 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:49:51.994125 | orchestrator | 2025-03-23 13:49:51 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:49:55.058576 | orchestrator | 2025-03-23 13:49:55 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:49:55.060960 | orchestrator | 2025-03-23 13:49:55 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:49:55.061969 | orchestrator | 2025-03-23 13:49:55 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:49:55.064301 | orchestrator | 2025-03-23 13:49:55 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:49:55.065730 | orchestrator | 2025-03-23 13:49:55 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:49:58.120953 | orchestrator | 2025-03-23 13:49:55 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:49:58.121072 | orchestrator | 2025-03-23 13:49:58 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:49:58.123313 | orchestrator | 2025-03-23 13:49:58 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:49:58.125429 | orchestrator | 2025-03-23 13:49:58 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:49:58.126893 | orchestrator | 2025-03-23 13:49:58 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:49:58.127401 | orchestrator | 2025-03-23 13:49:58 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:49:58.127684 | orchestrator | 2025-03-23 13:49:58 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:50:01.183184 | orchestrator | 2025-03-23 13:50:01 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:50:01.185621 | orchestrator | 2025-03-23 13:50:01 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:50:01.188076 | orchestrator | 2025-03-23 13:50:01 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:50:01.189655 | orchestrator | 2025-03-23 13:50:01 | INFO  | Task 
70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:50:01.191296 | orchestrator | 2025-03-23 13:50:01 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:50:04.245627 | orchestrator | 2025-03-23 13:50:01 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:50:04.245759 | orchestrator | 2025-03-23 13:50:04 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:50:04.250315 | orchestrator | 2025-03-23 13:50:04 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:50:04.254014 | orchestrator | 2025-03-23 13:50:04 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:50:04.255189 | orchestrator | 2025-03-23 13:50:04 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state STARTED 2025-03-23 13:50:04.259025 | orchestrator | 2025-03-23 13:50:04 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:50:07.300981 | orchestrator | 2025-03-23 13:50:04 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:50:07.301095 | orchestrator | 2025-03-23 13:50:07 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:50:07.301330 | orchestrator | 2025-03-23 13:50:07 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:50:07.302107 | orchestrator | 2025-03-23 13:50:07 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:50:07.303858 | orchestrator | 2025-03-23 13:50:07 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:50:07.304609 | orchestrator | 2025-03-23 13:50:07 | INFO  | Task 70da9f2e-cc3e-4f6c-9b18-ed9213e0dc6d is in state SUCCESS 2025-03-23 13:50:07.306452 | orchestrator | 2025-03-23 13:50:07.306486 | orchestrator | None 2025-03-23 13:50:07.306500 | orchestrator | 2025-03-23 13:50:07.306569 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 13:50:07.306585 | orchestrator | 2025-03-23 13:50:07.306600 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 13:50:07.306629 | orchestrator | Sunday 23 March 2025 13:45:12 +0000 (0:00:00.374) 0:00:00.374 ********** 2025-03-23 13:50:07.306644 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:50:07.306660 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:50:07.306674 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:50:07.306688 | orchestrator | 2025-03-23 13:50:07.306971 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-23 13:50:07.306994 | orchestrator | Sunday 23 March 2025 13:45:12 +0000 (0:00:00.547) 0:00:00.922 ********** 2025-03-23 13:50:07.307008 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-03-23 13:50:07.307022 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-03-23 13:50:07.307036 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-03-23 13:50:07.307050 | orchestrator | 2025-03-23 13:50:07.307064 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-03-23 13:50:07.307077 | orchestrator | 2025-03-23 13:50:07.307091 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-03-23 13:50:07.307105 | orchestrator | Sunday 23 March 2025 13:45:13 +0000 (0:00:00.351) 0:00:01.273 ********** 2025-03-23 
13:50:07.307119 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:50:07.307134 | orchestrator | 2025-03-23 13:50:07.307148 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-03-23 13:50:07.307162 | orchestrator | Sunday 23 March 2025 13:45:13 +0000 (0:00:00.837) 0:00:02.110 ********** 2025-03-23 13:50:07.307176 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-03-23 13:50:07.307190 | orchestrator | 2025-03-23 13:50:07.307203 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-03-23 13:50:07.307217 | orchestrator | Sunday 23 March 2025 13:45:18 +0000 (0:00:04.086) 0:00:06.197 ********** 2025-03-23 13:50:07.307231 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-03-23 13:50:07.307246 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-03-23 13:50:07.307260 | orchestrator | 2025-03-23 13:50:07.307274 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-03-23 13:50:07.307287 | orchestrator | Sunday 23 March 2025 13:45:25 +0000 (0:00:07.267) 0:00:13.465 ********** 2025-03-23 13:50:07.307301 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-03-23 13:50:07.307315 | orchestrator | 2025-03-23 13:50:07.307329 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-03-23 13:50:07.307343 | orchestrator | Sunday 23 March 2025 13:45:29 +0000 (0:00:04.048) 0:00:17.513 ********** 2025-03-23 13:50:07.307356 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-03-23 13:50:07.307370 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-03-23 13:50:07.307384 | orchestrator | 2025-03-23 13:50:07.307398 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-03-23 13:50:07.307412 | orchestrator | Sunday 23 March 2025 13:45:33 +0000 (0:00:04.521) 0:00:22.035 ********** 2025-03-23 13:50:07.307425 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-03-23 13:50:07.307439 | orchestrator | 2025-03-23 13:50:07.307453 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-03-23 13:50:07.307467 | orchestrator | Sunday 23 March 2025 13:45:37 +0000 (0:00:03.738) 0:00:25.773 ********** 2025-03-23 13:50:07.307481 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-03-23 13:50:07.307494 | orchestrator | 2025-03-23 13:50:07.307539 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-03-23 13:50:07.307554 | orchestrator | Sunday 23 March 2025 13:45:42 +0000 (0:00:05.046) 0:00:30.820 ********** 2025-03-23 13:50:07.307597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-23 13:50:07.307618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-23 13:50:07.307634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
-u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-23 13:50:07.307667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-23 13:50:07.307685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-23 13:50:07.307716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-23 13:50:07.307733 | orchestrator | 2025-03-23 13:50:07.307747 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2025-03-23 13:50:07.307761 | orchestrator | Sunday 23 March 2025 13:45:51 +0000 (0:00:08.935) 0:00:39.755 ********** 2025-03-23 13:50:07.307776 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:50:07.307790 | orchestrator | 2025-03-23 13:50:07.307804 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-03-23 13:50:07.307819 | orchestrator | Sunday 23 March 2025 13:45:52 +0000 (0:00:00.628) 0:00:40.384 ********** 2025-03-23 13:50:07.307832 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:50:07.307847 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:50:07.307860 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:50:07.307874 | orchestrator | 2025-03-23 13:50:07.307888 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-03-23 13:50:07.307902 | orchestrator | Sunday 23 March 2025 13:46:08 +0000 (0:00:16.167) 0:00:56.551 ********** 2025-03-23 13:50:07.307917 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-03-23 13:50:07.307937 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-03-23 13:50:07.307951 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-03-23 13:50:07.307965 | orchestrator | 2025-03-23 13:50:07.307979 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-03-23 13:50:07.307993 | orchestrator | Sunday 23 March 2025 13:46:12 +0000 (0:00:04.109) 0:01:00.661 ********** 2025-03-23 13:50:07.308007 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-03-23 13:50:07.308020 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-03-23 13:50:07.308034 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-03-23 13:50:07.308048 | orchestrator | 2025-03-23 13:50:07.308070 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-03-23 13:50:07.308085 | orchestrator | Sunday 23 March 2025 13:46:14 +0000 (0:00:02.384) 0:01:03.046 ********** 2025-03-23 13:50:07.308099 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:50:07.308117 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:50:07.308132 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:50:07.308145 | orchestrator | 2025-03-23 13:50:07.308159 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-03-23 13:50:07.308172 | orchestrator | Sunday 23 March 2025 13:46:16 +0000 (0:00:01.428) 0:01:04.474 ********** 2025-03-23 13:50:07.308186 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:50:07.308200 | orchestrator | 2025-03-23 13:50:07.308214 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-03-23 13:50:07.308227 | orchestrator | Sunday 23 March 2025 13:46:17 +0000 (0:00:00.977) 0:01:05.451 ********** 2025-03-23 13:50:07.308241 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:50:07.308260 | orchestrator | skipping: [testbed-node-1] 2025-03-23 
13:50:07.308274 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:50:07.308287 | orchestrator | 2025-03-23 13:50:07.308301 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-03-23 13:50:07.308315 | orchestrator | Sunday 23 March 2025 13:46:17 +0000 (0:00:00.662) 0:01:06.114 ********** 2025-03-23 13:50:07.308329 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:50:07.308342 | orchestrator | 2025-03-23 13:50:07.308356 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-03-23 13:50:07.308370 | orchestrator | Sunday 23 March 2025 13:46:19 +0000 (0:00:01.810) 0:01:07.924 ********** 2025-03-23 13:50:07.308392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-23 13:50:07.308415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-23 13:50:07.308439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-23 13:50:07.308455 | orchestrator | 2025-03-23 13:50:07.308469 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-03-23 13:50:07.308489 | orchestrator | Sunday 23 March 2025 13:46:27 +0000 (0:00:08.166) 0:01:16.091 ********** 2025-03-23 13:50:07.308504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-23 13:50:07.308538 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:50:07.308562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-23 13:50:07.308578 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:50:07.308592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-23 13:50:07.308616 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:50:07.308630 | orchestrator | 2025-03-23 13:50:07.308644 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-03-23 13:50:07.308658 | orchestrator | Sunday 23 March 2025 13:46:32 +0000 (0:00:04.737) 0:01:20.828 ********** 2025-03-23 13:50:07.308679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-23 13:50:07.308695 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:50:07.308709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-23 13:50:07.308731 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:50:07.308746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-23 13:50:07.308761 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:50:07.308774 | orchestrator | 2025-03-23 13:50:07.308789 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-03-23 
13:50:07.308803 | orchestrator | Sunday 23 March 2025 13:46:37 +0000 (0:00:04.940) 0:01:25.769 ********** 2025-03-23 13:50:07.308817 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:50:07.308831 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:50:07.308845 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:50:07.308859 | orchestrator | 2025-03-23 13:50:07.308878 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-03-23 13:50:07.308892 | orchestrator | Sunday 23 March 2025 13:46:46 +0000 (0:00:08.655) 0:01:34.424 ********** 2025-03-23 13:50:07.308907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-23 13:50:07.308930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-23 13:50:07.308955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-23 13:50:07.308987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-23 13:50:07.309010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-23 13:50:07.309034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-23 13:50:07.309049 | orchestrator | 2025-03-23 13:50:07.309063 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-03-23 13:50:07.309077 | orchestrator | Sunday 23 March 2025 13:47:01 +0000 (0:00:15.194) 0:01:49.618 ********** 2025-03-23 13:50:07.309091 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:50:07.309106 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:50:07.309120 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:50:07.309134 | orchestrator | 2025-03-23 13:50:07.309148 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-03-23 13:50:07.309162 | orchestrator | Sunday 23 March 2025 13:47:32 +0000 (0:00:31.091) 0:02:20.710 ********** 2025-03-23 13:50:07.309176 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:50:07.309190 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:50:07.309204 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:50:07.309218 | orchestrator | 2025-03-23 13:50:07.309232 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-03-23 13:50:07.309246 | orchestrator | Sunday 23 March 2025 13:47:45 +0000 (0:00:12.897) 0:02:33.608 ********** 2025-03-23 13:50:07.309260 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:50:07.309274 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:50:07.309288 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:50:07.309302 | orchestrator | 2025-03-23 13:50:07.309316 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-03-23 13:50:07.309330 | orchestrator | Sunday 23 March 2025 13:47:59 +0000 (0:00:14.201) 0:02:47.809 ********** 2025-03-23 13:50:07.309351 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:50:07.309364 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:50:07.309378 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:50:07.309392 | orchestrator | 2025-03-23 13:50:07.309406 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-03-23 13:50:07.309419 | orchestrator | Sunday 23 March 2025 13:48:11 +0000 (0:00:12.168) 0:02:59.977 ********** 2025-03-23 13:50:07.309433 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:50:07.309452 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:50:07.309466 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:50:07.309480 | orchestrator | 
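The per-item output in the tasks above repeats the same kolla-ansible service definition for glance-api on every node; only the node-local addresses differ. The companion glance-tls-proxy definition is disabled ('enabled': 'no'), which is why its items are skipped. For readability, a single trimmed copy of the glance-api definition is sketched below, reconstructed from the logged item for testbed-node-0 (host_in_groups, the empty volume entry, and the custom_member_list lines are omitted here; comments are added and are not part of the original output):

# Trimmed reconstruction of the glance-api service definition seen in the loop items above.
glance_api = {
    "container_name": "glance_api",
    "group": "glance-api",
    "enabled": True,
    "image": "registry.osism.tech/kolla/release/glance-api:28.1.1.20241206",
    # Proxies are cleared; the node address and 192.168.16.9 are exempted from proxying.
    "environment": {
        "http_proxy": "",
        "https_proxy": "",
        "no_proxy": "localhost,127.0.0.1,192.168.16.10,192.168.16.9",
    },
    "privileged": True,
    "volumes": [
        "/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "glance:/var/lib/glance/",
        "kolla_logs:/var/log/kolla/",
        "iscsi_info:/etc/iscsi",
        "/dev:/dev",
    ],
    # Container healthcheck: curl the node-local API endpoint every 30 seconds.
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9292"],
        "timeout": "30",
    },
    # Two HAProxy services: the internal endpoint and the external endpoint
    # api.testbed.osism.xyz, both on port 9292, each with 6h client/server timeouts.
    "haproxy": {
        "glance_api": {
            "enabled": True,
            "mode": "http",
            "external": False,
            "port": "9292",
            "frontend_http_extra": ["timeout client 6h"],
            "backend_http_extra": ["timeout server 6h"],
        },
        "glance_api_external": {
            "enabled": True,
            "mode": "http",
            "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "9292",
            "frontend_http_extra": ["timeout client 6h"],
            "backend_http_extra": ["timeout server 6h"],
        },
    },
}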
2025-03-23 13:50:07.309494 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-03-23 13:50:07.309549 | orchestrator | Sunday 23 March 2025 13:48:21 +0000 (0:00:10.136) 0:03:10.114 ********** 2025-03-23 13:50:07.309566 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:50:07.309580 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:50:07.309593 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:50:07.309607 | orchestrator | 2025-03-23 13:50:07.309621 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-03-23 13:50:07.309635 | orchestrator | Sunday 23 March 2025 13:48:22 +0000 (0:00:00.483) 0:03:10.597 ********** 2025-03-23 13:50:07.309649 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-03-23 13:50:07.309663 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:50:07.309677 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-03-23 13:50:07.309691 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:50:07.309705 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-03-23 13:50:07.309719 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:50:07.309733 | orchestrator | 2025-03-23 13:50:07.309753 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-03-23 13:50:07.309767 | orchestrator | Sunday 23 March 2025 13:48:31 +0000 (0:00:09.072) 0:03:19.670 ********** 2025-03-23 13:50:07.309782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-23 13:50:07.309805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 
'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-23 13:50:07.309829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 
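In the haproxy block carried by these items, each custom_member_list entry is a literal HAProxy "server ..." line and each *_http_extra entry is a literal option line; roughly, they are concatenated into the backend section for the service. A minimal illustrative sketch of that mapping, using the values logged above (this is not the kolla-ansible template itself, and the backend name used here is made up):

def render_backend(name, members, backend_extra):
    # Assemble an HAProxy backend from pre-rendered option and member lines.
    lines = [f"backend {name}", "    mode http"]
    lines += [f"    {option}" for option in backend_extra]
    lines += [f"    {member}" for member in members if member]  # skip the trailing '' seen in the log
    return "\n".join(lines)

members = [
    "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
    "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
    "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
    "",
]
print(render_backend("glance_api_back", members, ["timeout server 6h"]))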
2025-03-23 13:50:07.309851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-23 13:50:07.309875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-23 13:50:07.309891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-23 13:50:07.309912 | orchestrator | 2025-03-23 13:50:07.309927 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-03-23 13:50:07.309941 | orchestrator | Sunday 23 March 2025 13:48:37 +0000 (0:00:06.353) 0:03:26.023 ********** 2025-03-23 13:50:07.309955 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:50:07.309969 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:50:07.309983 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:50:07.309997 | orchestrator | 2025-03-23 13:50:07.310061 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-03-23 13:50:07.310080 | orchestrator | Sunday 23 March 2025 13:48:38 +0000 (0:00:00.357) 0:03:26.381 ********** 2025-03-23 13:50:07.310094 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:50:07.310108 | orchestrator | 2025-03-23 13:50:07.310122 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-03-23 13:50:07.310136 | orchestrator | Sunday 23 March 2025 13:48:40 +0000 (0:00:02.655) 0:03:29.036 ********** 2025-03-23 13:50:07.310149 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:50:07.310164 | orchestrator | 2025-03-23 13:50:07.310178 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-03-23 13:50:07.310191 | orchestrator | Sunday 23 March 2025 13:48:43 +0000 (0:00:02.697) 
0:03:31.733 **********
2025-03-23 13:50:07.310205 | orchestrator | changed: [testbed-node-0]
2025-03-23 13:50:07.310219 | orchestrator |
2025-03-23 13:50:07.310233 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-03-23 13:50:07.310247 | orchestrator | Sunday 23 March 2025 13:48:46 +0000 (0:00:02.733) 0:03:34.467 **********
2025-03-23 13:50:07.310261 | orchestrator | changed: [testbed-node-0]
2025-03-23 13:50:07.310275 | orchestrator |
2025-03-23 13:50:07.310289 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-03-23 13:50:07.310303 | orchestrator | Sunday 23 March 2025 13:49:18 +0000 (0:00:32.495) 0:04:06.962 **********
2025-03-23 13:50:07.310317 | orchestrator | changed: [testbed-node-0]
2025-03-23 13:50:07.310331 | orchestrator |
2025-03-23 13:50:07.310345 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-03-23 13:50:07.310359 | orchestrator | Sunday 23 March 2025 13:49:21 +0000 (0:00:02.456) 0:04:09.419 **********
2025-03-23 13:50:07.310373 | orchestrator |
2025-03-23 13:50:07.310387 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-03-23 13:50:07.310400 | orchestrator | Sunday 23 March 2025 13:49:21 +0000 (0:00:00.069) 0:04:09.489 **********
2025-03-23 13:50:07.310414 | orchestrator |
2025-03-23 13:50:07.310429 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-03-23 13:50:07.310443 | orchestrator | Sunday 23 March 2025 13:49:21 +0000 (0:00:00.057) 0:04:09.546 **********
2025-03-23 13:50:07.310456 | orchestrator |
2025-03-23 13:50:07.310471 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-03-23 13:50:07.310485 | orchestrator | Sunday 23 March 2025 13:49:21 +0000 (0:00:00.299) 0:04:09.846 **********
2025-03-23 13:50:07.310499 | orchestrator | changed: [testbed-node-1]
2025-03-23 13:50:07.310539 | orchestrator | changed: [testbed-node-2]
2025-03-23 13:50:07.310554 | orchestrator | changed: [testbed-node-0]
2025-03-23 13:50:07.310568 | orchestrator |
2025-03-23 13:50:07.310582 | orchestrator | PLAY RECAP *********************************************************************
2025-03-23 13:50:07.310597 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-03-23 13:50:07.310613 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-03-23 13:50:07.310627 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-03-23 13:50:07.310641 | orchestrator |
2025-03-23 13:50:07.310655 | orchestrator |
2025-03-23 13:50:07.310669 | orchestrator | TASKS RECAP ********************************************************************
2025-03-23 13:50:07.310682 | orchestrator | Sunday 23 March 2025 13:50:04 +0000 (0:00:42.895) 0:04:52.741 **********
2025-03-23 13:50:07.310696 | orchestrator | ===============================================================================
2025-03-23 13:50:07.310709 | orchestrator | glance : Restart glance-api container ---------------------------------- 42.90s
2025-03-23 13:50:07.310723 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 32.50s
2025-03-23 13:50:07.310737 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 31.09s
2025-03-23 13:50:07.310751 | orchestrator | glance : Ensuring glance service ceph config subdir exists ------------- 16.17s
2025-03-23 13:50:07.310765 | orchestrator | glance : Copying over config.json files for services ------------------- 15.19s
2025-03-23 13:50:07.310778 | orchestrator | glance : Copying over glance-swift.conf for glance_api ----------------- 14.20s
2025-03-23 13:50:07.310798 | orchestrator | glance : Copying over glance-cache.conf for glance_api ----------------- 12.90s
2025-03-23 13:50:07.310812 | orchestrator | glance : Copying over glance-image-import.conf ------------------------- 12.17s
2025-03-23 13:50:07.310826 | orchestrator | glance : Copying over property-protections-rules.conf ------------------ 10.14s
2025-03-23 13:50:07.310840 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 9.07s
2025-03-23 13:50:07.310854 | orchestrator | glance : Ensuring config directories exist ------------------------------ 8.94s
2025-03-23 13:50:07.310867 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 8.66s
2025-03-23 13:50:07.310881 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 8.17s
2025-03-23 13:50:07.310894 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.27s
2025-03-23 13:50:07.310908 | orchestrator | glance : Check glance containers ---------------------------------------- 6.35s
2025-03-23 13:50:07.310922 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 5.05s
2025-03-23 13:50:07.310936 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.94s
2025-03-23 13:50:07.310950 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.74s
2025-03-23 13:50:07.310964 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.52s
2025-03-23 13:50:07.310983 | orchestrator | glance : Copy over multiple ceph configs for Glance --------------------- 4.11s
2025-03-23 13:50:10.361111 | orchestrator | 2025-03-23 13:50:07 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:50:10.361217 | orchestrator | 2025-03-23 13:50:07 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:50:10.361271 | orchestrator | 2025-03-23 13:50:10 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:50:10.362534 | orchestrator | 2025-03-23 13:50:10 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:50:10.363837 | orchestrator | 2025-03-23 13:50:10 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:50:10.365579 | orchestrator | 2025-03-23 13:50:10 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:50:10.367139 | orchestrator | 2025-03-23 13:50:10 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:50:10.367389 | orchestrator | 2025-03-23 13:50:10 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:50:13.417990 | orchestrator | 2025-03-23 13:50:13 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:50:13.418383 | orchestrator | 2025-03-23 13:50:13 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:50:13.418966 | orchestrator | 2025-03-23 13:50:13 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in
state STARTED 2025-03-23 13:50:13.419775 | orchestrator | 2025-03-23 13:50:13 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:50:13.420817 | orchestrator | 2025-03-23 13:50:13 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:50:16.484838 | orchestrator | 2025-03-23 13:50:13 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:50:16.484946 | orchestrator | 2025-03-23 13:50:16 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:50:19.528898 | orchestrator | 2025-03-23 13:50:16 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:50:19.528994 | orchestrator | 2025-03-23 13:50:16 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:50:19.529012 | orchestrator | 2025-03-23 13:50:16 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:50:19.529026 | orchestrator | 2025-03-23 13:50:16 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:50:19.529041 | orchestrator | 2025-03-23 13:50:16 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:50:19.529070 | orchestrator | 2025-03-23 13:50:19 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:50:19.530713 | orchestrator | 2025-03-23 13:50:19 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:50:19.534337 | orchestrator | 2025-03-23 13:50:19 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:50:19.536905 | orchestrator | 2025-03-23 13:50:19 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:50:19.536938 | orchestrator | 2025-03-23 13:50:19 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:50:22.575901 | orchestrator | 2025-03-23 13:50:19 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:50:22.576029 | orchestrator | 2025-03-23 13:50:22 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:50:22.577494 | orchestrator | 2025-03-23 13:50:22 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:50:22.578385 | orchestrator | 2025-03-23 13:50:22 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:50:22.579731 | orchestrator | 2025-03-23 13:50:22 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:50:22.581901 | orchestrator | 2025-03-23 13:50:22 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:50:25.624433 | orchestrator | 2025-03-23 13:50:22 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:50:25.624736 | orchestrator | 2025-03-23 13:50:25 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:50:25.625647 | orchestrator | 2025-03-23 13:50:25 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:50:25.625686 | orchestrator | 2025-03-23 13:50:25 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:50:25.626445 | orchestrator | 2025-03-23 13:50:25 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:50:25.627464 | orchestrator | 2025-03-23 13:50:25 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:50:28.677558 | orchestrator | 2025-03-23 13:50:25 | INFO  | Wait 1 second(s) until the 
next check 2025-03-23 13:50:28.677683 | orchestrator | 2025-03-23 13:50:28 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:50:28.679071 | orchestrator | 2025-03-23 13:50:28 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state STARTED 2025-03-23 13:50:28.680664 | orchestrator | 2025-03-23 13:50:28 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:50:28.682875 | orchestrator | 2025-03-23 13:50:28 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:50:28.685184 | orchestrator | 2025-03-23 13:50:28 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:50:31.734289 | orchestrator | 2025-03-23 13:50:28 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:50:31.734409 | orchestrator | 2025-03-23 13:50:31 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:50:31.736390 | orchestrator | 2025-03-23 13:50:31 | INFO  | Task eeb9e4d0-3b7f-4a65-a28d-248e50fa89fd is in state SUCCESS 2025-03-23 13:50:31.737772 | orchestrator | 2025-03-23 13:50:31 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:50:31.739556 | orchestrator | 2025-03-23 13:50:31 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:50:31.741136 | orchestrator | 2025-03-23 13:50:31 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:50:34.793801 | orchestrator | 2025-03-23 13:50:31 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:50:34.793926 | orchestrator | 2025-03-23 13:50:34 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:50:34.794823 | orchestrator | 2025-03-23 13:50:34 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:50:34.796423 | orchestrator | 2025-03-23 13:50:34 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:50:34.798661 | orchestrator | 2025-03-23 13:50:34 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:50:37.852000 | orchestrator | 2025-03-23 13:50:34 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:50:37.852119 | orchestrator | 2025-03-23 13:50:37 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:50:37.853213 | orchestrator | 2025-03-23 13:50:37 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:50:37.854742 | orchestrator | 2025-03-23 13:50:37 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:50:37.856253 | orchestrator | 2025-03-23 13:50:37 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:50:37.857014 | orchestrator | 2025-03-23 13:50:37 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:50:40.905093 | orchestrator | 2025-03-23 13:50:40 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:50:40.906717 | orchestrator | 2025-03-23 13:50:40 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:50:43.956665 | orchestrator | 2025-03-23 13:50:40 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:50:43.956766 | orchestrator | 2025-03-23 13:50:40 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:50:43.956784 | orchestrator | 2025-03-23 13:50:40 | INFO  | Wait 1 second(s) until the 
next check 2025-03-23 13:50:43.956835 | orchestrator | 2025-03-23 13:50:43 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:50:47.005630 | orchestrator | 2025-03-23 13:50:43 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:50:47.005738 | orchestrator | 2025-03-23 13:50:43 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:50:47.005755 | orchestrator | 2025-03-23 13:50:43 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:50:47.005771 | orchestrator | 2025-03-23 13:50:43 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:50:47.005800 | orchestrator | 2025-03-23 13:50:47 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:50:47.006787 | orchestrator | 2025-03-23 13:50:47 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:50:47.009191 | orchestrator | 2025-03-23 13:50:47 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:50:47.011196 | orchestrator | 2025-03-23 13:50:47 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:50:50.061127 | orchestrator | 2025-03-23 13:50:47 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:50:50.061240 | orchestrator | 2025-03-23 13:50:50 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:50:50.062848 | orchestrator | 2025-03-23 13:50:50 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:50:50.063917 | orchestrator | 2025-03-23 13:50:50 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:50:50.065200 | orchestrator | 2025-03-23 13:50:50 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:50:50.065352 | orchestrator | 2025-03-23 13:50:50 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:50:53.111223 | orchestrator | 2025-03-23 13:50:53 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:50:53.111626 | orchestrator | 2025-03-23 13:50:53 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:50:53.111672 | orchestrator | 2025-03-23 13:50:53 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:50:53.112214 | orchestrator | 2025-03-23 13:50:53 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:50:56.150642 | orchestrator | 2025-03-23 13:50:53 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:50:56.150755 | orchestrator | 2025-03-23 13:50:56 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:50:56.151569 | orchestrator | 2025-03-23 13:50:56 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:50:56.152573 | orchestrator | 2025-03-23 13:50:56 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:50:56.154947 | orchestrator | 2025-03-23 13:50:56 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:50:59.193539 | orchestrator | 2025-03-23 13:50:56 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:50:59.193712 | orchestrator | 2025-03-23 13:50:59 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:50:59.196590 | orchestrator | 2025-03-23 13:50:59 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state 
STARTED 2025-03-23 13:50:59.196949 | orchestrator | 2025-03-23 13:50:59 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:50:59.199370 | orchestrator | 2025-03-23 13:50:59 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:50:59.201311 | orchestrator | 2025-03-23 13:50:59 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:51:02.245622 | orchestrator | 2025-03-23 13:51:02 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:51:02.246243 | orchestrator | 2025-03-23 13:51:02 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:51:02.247159 | orchestrator | 2025-03-23 13:51:02 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:51:02.248580 | orchestrator | 2025-03-23 13:51:02 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:51:05.307354 | orchestrator | 2025-03-23 13:51:02 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:51:05.307484 | orchestrator | 2025-03-23 13:51:05 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:51:05.310160 | orchestrator | 2025-03-23 13:51:05 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:51:05.313333 | orchestrator | 2025-03-23 13:51:05 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:51:05.315286 | orchestrator | 2025-03-23 13:51:05 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:51:08.359761 | orchestrator | 2025-03-23 13:51:05 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:51:08.359865 | orchestrator | 2025-03-23 13:51:08 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:51:08.361471 | orchestrator | 2025-03-23 13:51:08 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:51:08.363191 | orchestrator | 2025-03-23 13:51:08 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:51:08.365183 | orchestrator | 2025-03-23 13:51:08 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:51:11.422449 | orchestrator | 2025-03-23 13:51:08 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:51:11.422584 | orchestrator | 2025-03-23 13:51:11 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:51:11.433990 | orchestrator | 2025-03-23 13:51:11 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:51:11.435750 | orchestrator | 2025-03-23 13:51:11 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:51:11.441113 | orchestrator | 2025-03-23 13:51:11 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:51:14.482310 | orchestrator | 2025-03-23 13:51:11 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:51:14.482444 | orchestrator | 2025-03-23 13:51:14 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:51:14.483942 | orchestrator | 2025-03-23 13:51:14 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:51:14.483990 | orchestrator | 2025-03-23 13:51:14 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:51:14.488009 | orchestrator | 2025-03-23 13:51:14 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state 
STARTED 2025-03-23 13:51:17.534643 | orchestrator | 2025-03-23 13:51:14 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:51:17.534757 | orchestrator | 2025-03-23 13:51:17 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:51:17.536394 | orchestrator | 2025-03-23 13:51:17 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:51:17.537791 | orchestrator | 2025-03-23 13:51:17 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:51:17.538776 | orchestrator | 2025-03-23 13:51:17 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:51:20.579171 | orchestrator | 2025-03-23 13:51:17 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:51:20.579300 | orchestrator | 2025-03-23 13:51:20 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:51:20.580661 | orchestrator | 2025-03-23 13:51:20 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:51:20.583431 | orchestrator | 2025-03-23 13:51:20 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:51:20.584395 | orchestrator | 2025-03-23 13:51:20 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:51:23.629959 | orchestrator | 2025-03-23 13:51:20 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:51:23.630146 | orchestrator | 2025-03-23 13:51:23 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:51:23.631812 | orchestrator | 2025-03-23 13:51:23 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:51:23.633644 | orchestrator | 2025-03-23 13:51:23 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:51:23.635715 | orchestrator | 2025-03-23 13:51:23 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:51:26.684414 | orchestrator | 2025-03-23 13:51:23 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:51:26.684586 | orchestrator | 2025-03-23 13:51:26 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:51:26.686814 | orchestrator | 2025-03-23 13:51:26 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:51:26.686846 | orchestrator | 2025-03-23 13:51:26 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:51:26.688867 | orchestrator | 2025-03-23 13:51:26 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:51:26.689328 | orchestrator | 2025-03-23 13:51:26 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:51:29.735743 | orchestrator | 2025-03-23 13:51:29 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:51:29.738168 | orchestrator | 2025-03-23 13:51:29 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:51:29.739637 | orchestrator | 2025-03-23 13:51:29 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:51:29.741670 | orchestrator | 2025-03-23 13:51:29 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:51:32.797139 | orchestrator | 2025-03-23 13:51:29 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:51:32.797276 | orchestrator | 2025-03-23 13:51:32 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 
13:51:32.799325 | orchestrator | 2025-03-23 13:51:32 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:51:32.801969 | orchestrator | 2025-03-23 13:51:32 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:51:32.803857 | orchestrator | 2025-03-23 13:51:32 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state STARTED 2025-03-23 13:51:35.853614 | orchestrator | 2025-03-23 13:51:32 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:51:35.853735 | orchestrator | 2025-03-23 13:51:35 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:51:35.855358 | orchestrator | 2025-03-23 13:51:35 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:51:35.857172 | orchestrator | 2025-03-23 13:51:35 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:51:35.858176 | orchestrator | 2025-03-23 13:51:35 | INFO  | Task 3b744ebc-05c3-414e-a3c0-fa64e831893d is in state SUCCESS 2025-03-23 13:51:38.910109 | orchestrator | 2025-03-23 13:51:35 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:51:38.910228 | orchestrator | 2025-03-23 13:51:38 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:51:38.911955 | orchestrator | 2025-03-23 13:51:38 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:51:38.914666 | orchestrator | 2025-03-23 13:51:38 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:51:41.967249 | orchestrator | 2025-03-23 13:51:38 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:51:41.967384 | orchestrator | 2025-03-23 13:51:41 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:51:41.969302 | orchestrator | 2025-03-23 13:51:41 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:51:41.972265 | orchestrator | 2025-03-23 13:51:41 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:51:45.059157 | orchestrator | 2025-03-23 13:51:41 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:51:45.059278 | orchestrator | 2025-03-23 13:51:45 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:51:45.059697 | orchestrator | 2025-03-23 13:51:45 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:51:45.061075 | orchestrator | 2025-03-23 13:51:45 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:51:48.109974 | orchestrator | 2025-03-23 13:51:45 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:51:48.110157 | orchestrator | 2025-03-23 13:51:48 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:51:48.113296 | orchestrator | 2025-03-23 13:51:48 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:51:48.113660 | orchestrator | 2025-03-23 13:51:48 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:51:51.158781 | orchestrator | 2025-03-23 13:51:48 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:51:51.158901 | orchestrator | 2025-03-23 13:51:51 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:51:54.202618 | orchestrator | 2025-03-23 13:51:51 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:51:54.202720 | 
orchestrator | 2025-03-23 13:51:51 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:51:54.202737 | orchestrator | 2025-03-23 13:51:51 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:51:54.202802 | orchestrator | 2025-03-23 13:51:54 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:51:54.205157 | orchestrator | 2025-03-23 13:51:54 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:51:54.209835 | orchestrator | 2025-03-23 13:51:54 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:51:57.264903 | orchestrator | 2025-03-23 13:51:54 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:51:57.265043 | orchestrator | 2025-03-23 13:51:57 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:51:57.266818 | orchestrator | 2025-03-23 13:51:57 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:51:57.268731 | orchestrator | 2025-03-23 13:51:57 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:52:00.337870 | orchestrator | 2025-03-23 13:51:57 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:52:00.338002 | orchestrator | 2025-03-23 13:52:00 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:52:00.341002 | orchestrator | 2025-03-23 13:52:00 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:52:00.342419 | orchestrator | 2025-03-23 13:52:00 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:52:03.396631 | orchestrator | 2025-03-23 13:52:00 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:52:03.396754 | orchestrator | 2025-03-23 13:52:03 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:52:03.397588 | orchestrator | 2025-03-23 13:52:03 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:52:03.398844 | orchestrator | 2025-03-23 13:52:03 | INFO  | Task 8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state STARTED 2025-03-23 13:52:06.455448 | orchestrator | 2025-03-23 13:52:03 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:52:06.455629 | orchestrator | 2025-03-23 13:52:06 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:52:06.458806 | orchestrator | 2025-03-23 13:52:06 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:52:06.458854 | orchestrator | 2025-03-23 13:52:06.458868 | orchestrator | 2025-03-23 13:52:06.458881 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 13:52:06.458894 | orchestrator | 2025-03-23 13:52:06.458906 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 13:52:06.458919 | orchestrator | Sunday 23 March 2025 13:49:28 +0000 (0:00:01.002) 0:00:01.002 ********** 2025-03-23 13:52:06.458931 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:52:06.458945 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:52:06.458958 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:52:06.458970 | orchestrator | 2025-03-23 13:52:06.458983 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-23 13:52:06.458995 | orchestrator | Sunday 23 March 2025 13:49:29 +0000 (0:00:01.114) 0:00:02.117 ********** 
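The long runs of "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" messages above come from the osism client polling the state of background Celery tasks until each one reaches SUCCESS or FAILURE. A minimal sketch of that kind of polling loop, assuming a configured Celery app object named app and a list of task IDs (illustrative only, not the actual osism implementation):

import time
from celery.result import AsyncResult

def wait_for_tasks(app, task_ids, interval=1):
    # Poll each task's state until every task has left STARTED/PENDING.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = AsyncResult(task_id, app=app).state  # e.g. STARTED, SUCCESS, FAILURE
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

In the log the rounds of checks arrive roughly every three seconds rather than every second, presumably because the per-task result lookups themselves take time on top of the one-second sleep.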
2025-03-23 13:52:06.459008 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-03-23 13:52:06.459020 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-03-23 13:52:06.459033 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-03-23 13:52:06.459045 | orchestrator | 2025-03-23 13:52:06.459057 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-03-23 13:52:06.459069 | orchestrator | 2025-03-23 13:52:06.459081 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-03-23 13:52:06.459094 | orchestrator | Sunday 23 March 2025 13:49:30 +0000 (0:00:00.776) 0:00:02.894 ********** 2025-03-23 13:52:06.459129 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:52:06.459144 | orchestrator | 2025-03-23 13:52:06.459319 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-03-23 13:52:06.459716 | orchestrator | Sunday 23 March 2025 13:49:31 +0000 (0:00:01.279) 0:00:04.173 ********** 2025-03-23 13:52:06.459732 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-03-23 13:52:06.459746 | orchestrator | 2025-03-23 13:52:06.459759 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-03-23 13:52:06.459772 | orchestrator | Sunday 23 March 2025 13:49:35 +0000 (0:00:04.258) 0:00:08.432 ********** 2025-03-23 13:52:06.459786 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-03-23 13:52:06.459799 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-03-23 13:52:06.459812 | orchestrator | 2025-03-23 13:52:06.459825 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-03-23 13:52:06.459839 | orchestrator | Sunday 23 March 2025 13:49:43 +0000 (0:00:07.839) 0:00:16.271 ********** 2025-03-23 13:52:06.459852 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-03-23 13:52:06.459865 | orchestrator | 2025-03-23 13:52:06.459878 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-03-23 13:52:06.459891 | orchestrator | Sunday 23 March 2025 13:49:47 +0000 (0:00:04.076) 0:00:20.348 ********** 2025-03-23 13:52:06.459904 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-03-23 13:52:06.459917 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-03-23 13:52:06.460204 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-03-23 13:52:06.460226 | orchestrator | 2025-03-23 13:52:06.460239 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-03-23 13:52:06.460265 | orchestrator | Sunday 23 March 2025 13:49:56 +0000 (0:00:08.872) 0:00:29.220 ********** 2025-03-23 13:52:06.460278 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-03-23 13:52:06.460290 | orchestrator | 2025-03-23 13:52:06.460303 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-03-23 13:52:06.460315 | orchestrator | Sunday 23 March 2025 13:50:00 +0000 (0:00:03.921) 0:00:33.142 ********** 2025-03-23 13:52:06.460327 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 
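The service-ks-register tasks above set up the Keystone resources Octavia needs: a load-balancer service, internal and public endpoints on port 9876, the service project, an octavia service user, and the admin role grant being applied here (only testbed-node-0 reports changes because the registration tasks run against the API once). A rough openstacksdk equivalent of those steps, assuming admin credentials under a hypothetical clouds.yaml entry named "testbed" and an assumed region of RegionOne (a sketch of what the role automates, not the role itself):

import openstack

conn = openstack.connect(cloud="testbed")  # hypothetical clouds.yaml entry

# Service and endpoints ("Creating services" / "Creating endpoints")
svc = conn.identity.create_service(name="octavia", type="load-balancer")
for interface, url in (("internal", "https://api-int.testbed.osism.xyz:9876"),
                       ("public", "https://api.testbed.osism.xyz:9876")):
    conn.identity.create_endpoint(service_id=svc.id, interface=interface,
                                  url=url, region_id="RegionOne")  # region assumed

# Service user and role grant ("Creating users" / "Granting user roles")
project = conn.identity.find_project("service")
user = conn.identity.create_user(name="octavia", password="CHANGE_ME",
                                 default_project_id=project.id)
admin = conn.identity.find_role("admin")
conn.identity.assign_project_role_to_user(project, user, admin)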
2025-03-23 13:52:06.460340 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-03-23 13:52:06.460352 | orchestrator | 2025-03-23 13:52:06.460364 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-03-23 13:52:06.460376 | orchestrator | Sunday 23 March 2025 13:50:07 +0000 (0:00:07.443) 0:00:40.585 ********** 2025-03-23 13:52:06.460389 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-03-23 13:52:06.460401 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-03-23 13:52:06.460413 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-03-23 13:52:06.460425 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-03-23 13:52:06.460438 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-03-23 13:52:06.460450 | orchestrator | 2025-03-23 13:52:06.460462 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-03-23 13:52:06.460505 | orchestrator | Sunday 23 March 2025 13:50:25 +0000 (0:00:17.935) 0:00:58.521 ********** 2025-03-23 13:52:06.460523 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:52:06.460537 | orchestrator | 2025-03-23 13:52:06.460555 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-03-23 13:52:06.460568 | orchestrator | Sunday 23 March 2025 13:50:26 +0000 (0:00:00.932) 0:00:59.454 ********** 2025-03-23 13:52:06.460628 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.: ", "response": "
503 Service Unavailable
\nNo server is available to handle this request.\n\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request.: "} 2025-03-23 13:52:06.460647 | orchestrator | 2025-03-23 13:52:06.460660 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:52:06.460677 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-03-23 13:52:06.460692 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:52:06.460704 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:52:06.460717 | orchestrator | 2025-03-23 13:52:06.460729 | orchestrator | 2025-03-23 13:52:06.460741 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:52:06.460754 | orchestrator | Sunday 23 March 2025 13:50:30 +0000 (0:00:03.641) 0:01:03.095 ********** 2025-03-23 13:52:06.460766 | orchestrator | =============================================================================== 2025-03-23 13:52:06.460778 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.94s 2025-03-23 13:52:06.460790 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.87s 2025-03-23 13:52:06.460803 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.84s 2025-03-23 13:52:06.460815 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.44s 2025-03-23 13:52:06.460827 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 4.26s 2025-03-23 13:52:06.460839 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 4.08s 2025-03-23 13:52:06.460852 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.92s 2025-03-23 13:52:06.460866 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.64s 2025-03-23 13:52:06.460880 | orchestrator | octavia : include_tasks ------------------------------------------------- 1.28s 2025-03-23 13:52:06.460894 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.11s 2025-03-23 13:52:06.460907 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.93s 2025-03-23 13:52:06.460921 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.78s 2025-03-23 13:52:06.460934 | orchestrator | 2025-03-23 13:52:06.460948 | orchestrator | 2025-03-23 13:52:06.460961 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 13:52:06.460975 | orchestrator | 2025-03-23 13:52:06.460988 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 13:52:06.461001 | orchestrator | Sunday 23 March 2025 13:49:08 +0000 (0:00:00.210) 0:00:00.210 ********** 2025-03-23 13:52:06.461015 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:52:06.461029 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:52:06.461043 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:52:06.461057 | orchestrator | 2025-03-23 13:52:06.461070 | orchestrator | TASK [Group hosts based 
on enabled services] *********************************** 2025-03-23 13:52:06.461084 | orchestrator | Sunday 23 March 2025 13:49:08 +0000 (0:00:00.382) 0:00:00.593 ********** 2025-03-23 13:52:06.461098 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-03-23 13:52:06.461112 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-03-23 13:52:06.461125 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-03-23 13:52:06.461139 | orchestrator | 2025-03-23 13:52:06.461152 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-03-23 13:52:06.461179 | orchestrator | 2025-03-23 13:52:06.461192 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-03-23 13:52:06.461210 | orchestrator | Sunday 23 March 2025 13:49:08 +0000 (0:00:00.549) 0:00:01.142 ********** 2025-03-23 13:52:06.461223 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:52:06.461236 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:52:06.461249 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:52:06.461268 | orchestrator | 2025-03-23 13:52:06.461281 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:52:06.461294 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:52:06.461307 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:52:06.461319 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 13:52:06.461331 | orchestrator | 2025-03-23 13:52:06.461343 | orchestrator | 2025-03-23 13:52:06.461356 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 13:52:06.461368 | orchestrator | Sunday 23 March 2025 13:51:34 +0000 (0:02:25.375) 0:02:26.518 ********** 2025-03-23 13:52:06.461381 | orchestrator | =============================================================================== 2025-03-23 13:52:06.461393 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 145.38s 2025-03-23 13:52:06.461405 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s 2025-03-23 13:52:06.461417 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s 2025-03-23 13:52:06.461430 | orchestrator | 2025-03-23 13:52:06.461442 | orchestrator | 2025-03-23 13:52:06.461454 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 13:52:06.461467 | orchestrator | 2025-03-23 13:52:06.461529 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 13:52:06.461570 | orchestrator | Sunday 23 March 2025 13:50:08 +0000 (0:00:00.364) 0:00:00.364 ********** 2025-03-23 13:52:06.461584 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:52:06.461598 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:52:06.461611 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:52:06.461624 | orchestrator | 2025-03-23 13:52:06.461636 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-23 13:52:06.461648 | orchestrator | Sunday 23 March 2025 13:50:09 +0000 (0:00:00.666) 0:00:01.030 ********** 2025-03-23 13:52:06.461660 | orchestrator | ok: [testbed-node-0] => 
(item=enable_grafana_True) 2025-03-23 13:52:06.461673 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-03-23 13:52:06.461685 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-03-23 13:52:06.461698 | orchestrator | 2025-03-23 13:52:06.461710 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-03-23 13:52:06.461722 | orchestrator | 2025-03-23 13:52:06.461735 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-03-23 13:52:06.461747 | orchestrator | Sunday 23 March 2025 13:50:09 +0000 (0:00:00.611) 0:00:01.642 ********** 2025-03-23 13:52:06.461759 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:52:06.461772 | orchestrator | 2025-03-23 13:52:06.461784 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-03-23 13:52:06.461796 | orchestrator | Sunday 23 March 2025 13:50:11 +0000 (0:00:01.408) 0:00:03.051 ********** 2025-03-23 13:52:06.461809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-23 13:52:06.461836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-23 13:52:06.461850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-23 13:52:06.461863 | orchestrator | 2025-03-23 13:52:06.461876 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-03-23 13:52:06.461888 | orchestrator | Sunday 23 March 2025 13:50:12 +0000 (0:00:01.184) 0:00:04.236 ********** 2025-03-23 
13:52:06.461900 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-03-23 13:52:06.461918 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-03-23 13:52:06.461931 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-23 13:52:06.461943 | orchestrator | 2025-03-23 13:52:06.461956 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-03-23 13:52:06.461968 | orchestrator | Sunday 23 March 2025 13:50:13 +0000 (0:00:00.574) 0:00:04.811 ********** 2025-03-23 13:52:06.461980 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:52:06.461992 | orchestrator | 2025-03-23 13:52:06.462005 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-03-23 13:52:06.462061 | orchestrator | Sunday 23 March 2025 13:50:13 +0000 (0:00:00.759) 0:00:05.570 ********** 2025-03-23 13:52:06.462107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-23 13:52:06.462122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-23 13:52:06.462144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-23 13:52:06.462157 | orchestrator | 2025-03-23 13:52:06.462169 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-03-23 13:52:06.462182 | orchestrator | Sunday 23 March 2025 13:50:15 +0000 (0:00:02.080) 0:00:07.651 ********** 2025-03-23 13:52:06.462194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-23 13:52:06.462207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-23 13:52:06.462220 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:52:06.462232 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:52:06.462269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-23 13:52:06.462283 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:52:06.462296 | orchestrator | 2025-03-23 13:52:06.462308 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-03-23 13:52:06.462320 | orchestrator | Sunday 23 March 2025 13:50:16 +0000 (0:00:00.632) 0:00:08.283 ********** 2025-03-23 13:52:06.462333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-23 13:52:06.462352 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:52:06.462365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-23 13:52:06.462378 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:52:06.462392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-23 13:52:06.462405 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:52:06.462417 | orchestrator | 2025-03-23 13:52:06.462429 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-03-23 13:52:06.462442 | orchestrator | Sunday 23 March 2025 13:50:17 +0000 (0:00:00.884) 0:00:09.168 ********** 2025-03-23 13:52:06.462454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-23 13:52:06.462515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-23 13:52:06.462557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-23 13:52:06.462585 | orchestrator | 2025-03-23 13:52:06.462598 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-03-23 13:52:06.462615 | orchestrator | Sunday 23 March 2025 13:50:19 +0000 (0:00:01.583) 0:00:10.751 ********** 2025-03-23 13:52:06.462628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-23 13:52:06.462642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-23 13:52:06.462655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-23 13:52:06.462668 | orchestrator | 2025-03-23 13:52:06.462680 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-03-23 13:52:06.462693 | orchestrator | Sunday 23 March 2025 13:50:21 +0000 (0:00:02.118) 0:00:12.870 ********** 2025-03-23 13:52:06.462705 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:52:06.462718 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:52:06.462731 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:52:06.462743 | orchestrator | 2025-03-23 13:52:06.462755 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-03-23 13:52:06.462768 | orchestrator | Sunday 23 March 2025 13:50:21 +0000 (0:00:00.315) 0:00:13.185 ********** 2025-03-23 13:52:06.462780 | orchestrator 
| changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-03-23 13:52:06.462797 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-03-23 13:52:06.462810 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-03-23 13:52:06.462822 | orchestrator | 2025-03-23 13:52:06.462835 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-03-23 13:52:06.462847 | orchestrator | Sunday 23 March 2025 13:50:22 +0000 (0:00:01.447) 0:00:14.633 ********** 2025-03-23 13:52:06.462867 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-03-23 13:52:06.462879 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-03-23 13:52:06.462892 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-03-23 13:52:06.462904 | orchestrator | 2025-03-23 13:52:06.462939 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-03-23 13:52:06.462954 | orchestrator | Sunday 23 March 2025 13:50:24 +0000 (0:00:01.598) 0:00:16.231 ********** 2025-03-23 13:52:06.462966 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-23 13:52:06.462978 | orchestrator | 2025-03-23 13:52:06.462990 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-03-23 13:52:06.463003 | orchestrator | Sunday 23 March 2025 13:50:25 +0000 (0:00:00.502) 0:00:16.734 ********** 2025-03-23 13:52:06.463015 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-03-23 13:52:06.463027 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-03-23 13:52:06.463040 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:52:06.463052 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:52:06.463065 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:52:06.463077 | orchestrator | 2025-03-23 13:52:06.463089 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-03-23 13:52:06.463102 | orchestrator | Sunday 23 March 2025 13:50:26 +0000 (0:00:01.052) 0:00:17.787 ********** 2025-03-23 13:52:06.463114 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:52:06.463126 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:52:06.463139 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:52:06.463151 | orchestrator | 2025-03-23 13:52:06.463163 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-03-23 13:52:06.463176 | orchestrator | Sunday 23 March 2025 13:50:26 +0000 (0:00:00.528) 0:00:18.315 ********** 2025-03-23 13:52:06.463199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1091322, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1268568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1091322, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1268568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1091322, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1268568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1091312, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1248567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1091312, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1248567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1091312, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1248567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463311 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1091307, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1238565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1091307, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1238565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1091307, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1238565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1091317, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1258566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1091317, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1258566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-detai2025-03-23 13:52:06 | INFO  | Task 
8dbb8f44-179d-41d3-9d3c-5e57ed6d3ea8 is in state SUCCESS 2025-03-23 13:52:06.463435 | orchestrator | ls.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1091317, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1258566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1091292, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1228566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1091292, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1228566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1091292, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1228566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1091310, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1238565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1091310, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1238565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1091310, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1238565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1091316, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1258566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1091316, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1258566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1091316, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1258566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1091289, 'dev': 222, 'nlink': 1, 'atime': 
1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1228566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1091289, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1228566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1091289, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1228566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1091277, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1208565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1091277, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1208565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1091277, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1208565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1091297, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1228566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1091297, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1228566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1091297, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1228566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1091279, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1218567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1091279, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1218567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1091279, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1218567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1091314, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1258566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1091314, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1258566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1091314, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1258566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1091302, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1238565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1091302, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1238565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1091302, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1238565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1091320, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1258566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1091320, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1258566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1091320, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1258566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.463981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1091285, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 
'ctime': 1742734228.1228566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1091285, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1228566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1091285, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1228566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1091311, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1248567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1091311, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1248567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1091311, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1248567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1091278, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1218567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1091278, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1218567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1091278, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1218567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1091282, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1228566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1091282, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1228566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464163 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1091282, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1228566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1091306, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1238565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1091306, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1238565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1091306, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1238565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1091365, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1428568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1091365, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1428568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1091365, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1428568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1091359, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1298568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1091359, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1298568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:52:06.464334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1091359, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1298568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json',
'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1091401, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.144857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1091401, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.144857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1091401, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.144857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1091328, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1268568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1091328, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1268568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1091328, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1268568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1091407, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1458569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1091407, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1458569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1091407, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1458569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1091393, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1438568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1091393, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1438568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1091393, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1438568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1091396, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1438568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1091396, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1438568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1091396, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1438568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1091329, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1278567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1091329, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1278567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1091329, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1278567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1091363, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1298568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1091363, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1298568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1091363, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1298568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1091410, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1458569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1091410, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1458569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1091410, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1458569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1091399, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1438568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1091399, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 
'ctime': 1742734228.1438568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1091399, 'dev': 222, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1742734228.1438568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1091334, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1278567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1091334, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1278567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1091334, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1278567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1091332, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1278567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1091332, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1278567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1091332, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1278567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1091341, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1288567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1091341, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1288567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1091341, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1288567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
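The loop entries around this point come from the grafana role's "Copying over custom dashboards" task: each item pairs a relative dashboard path (the key, e.g. ceph/pool-detail.json or infrastructure/fluentd.json) with the file facts gathered for that file, and each dashboard is pushed to every Grafana host in turn. A minimal sketch of a dict-driven copy task of this shape follows; the module choice, destination path, and variable name are assumptions for illustration only, not the actual kolla-ansible task.

# Hypothetical sketch of a dict-driven dashboard copy task; only the item
# shape (key = relative path, value = file facts) mirrors the log output.
- name: Copying over custom dashboards
  ansible.builtin.copy:
    src: "{{ item.value.path }}"                          # absolute source path from the file facts
    dest: "/etc/kolla/grafana/dashboards/{{ item.key }}"  # assumed destination layout
    mode: "0644"
  with_dict: "{{ grafana_custom_dashboards }}"            # assumed variable holding the dict shown above
  loop_control:
    label: "{{ item.key }}"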
2025-03-23 13:52:06.464949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1091347, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1298568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1091347, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1298568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1091347, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1298568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.464992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1091413, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.465002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1091413, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.465013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': 
'/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1091413, 'dev': 222, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1742734228.1468568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-23 13:52:06.465029 | orchestrator | 2025-03-23 13:52:06.465039 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-03-23 13:52:06.465049 | orchestrator | Sunday 23 March 2025 13:51:03 +0000 (0:00:36.810) 0:00:55.126 ********** 2025-03-23 13:52:06.465065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-23 13:52:06.465076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-23 13:52:06.465095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-23 13:52:06.465106 | orchestrator | 2025-03-23 13:52:06.465116 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-03-23 13:52:06.465126 | orchestrator | Sunday 23 March 2025 13:51:04 +0000 (0:00:01.175) 0:00:56.301 ********** 2025-03-23 13:52:06.465136 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:52:06.465146 | orchestrator | 2025-03-23 13:52:06.465156 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-03-23 13:52:06.465167 | orchestrator | 
Sunday 23 March 2025 13:51:07 +0000 (0:00:02.919) 0:00:59.221 ********** 2025-03-23 13:52:06.465177 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:52:06.465187 | orchestrator | 2025-03-23 13:52:06.465197 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-03-23 13:52:06.465207 | orchestrator | Sunday 23 March 2025 13:51:10 +0000 (0:00:02.603) 0:01:01.825 ********** 2025-03-23 13:52:06.465216 | orchestrator | 2025-03-23 13:52:06.465226 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-03-23 13:52:06.465237 | orchestrator | Sunday 23 March 2025 13:51:10 +0000 (0:00:00.079) 0:01:01.904 ********** 2025-03-23 13:52:06.465247 | orchestrator | 2025-03-23 13:52:06.465257 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-03-23 13:52:06.465267 | orchestrator | Sunday 23 March 2025 13:51:10 +0000 (0:00:00.067) 0:01:01.972 ********** 2025-03-23 13:52:06.465277 | orchestrator | 2025-03-23 13:52:06.465287 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-03-23 13:52:06.465296 | orchestrator | Sunday 23 March 2025 13:51:10 +0000 (0:00:00.246) 0:01:02.218 ********** 2025-03-23 13:52:06.465306 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:52:06.465322 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:52:06.465332 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:52:06.465342 | orchestrator | 2025-03-23 13:52:06.465352 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-03-23 13:52:06.465362 | orchestrator | Sunday 23 March 2025 13:51:12 +0000 (0:00:02.100) 0:01:04.319 ********** 2025-03-23 13:52:06.465372 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:52:06.465381 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:52:06.465392 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-03-23 13:52:06.465402 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
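The FAILED - RETRYING messages above are Ansible's standard reporting for a task with an until condition that has not yet passed: the handler keeps polling Grafana on the first node and reports ok once it answers. A minimal sketch of that retry pattern follows; the exact URL, status check, retry count, and delay are assumptions (the log only shows that at least twelve retries were allowed), not the literal kolla-ansible handler.

# Hypothetical sketch of the poll-until-ready pattern that produces the
# FAILED - RETRYING lines above; URL and timing values are assumptions.
- name: Waiting for grafana to start on first node
  ansible.builtin.uri:
    url: "https://api-int.testbed.osism.xyz:3000/login"   # assumed endpoint; port 3000 matches the haproxy config above
    status_code: 200
    validate_certs: false
  register: grafana_ready
  until: grafana_ready.status == 200
  retries: 12      # consistent with "(12 retries left)" in the output
  delay: 10        # assumed
  run_once: true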
2025-03-23 13:52:06.465412 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:52:06.465422 | orchestrator | 2025-03-23 13:52:06.465437 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-03-23 13:52:06.465447 | orchestrator | Sunday 23 March 2025 13:51:40 +0000 (0:00:27.526) 0:01:31.845 ********** 2025-03-23 13:52:06.465458 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:52:06.465468 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:52:06.465493 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:52:06.465504 | orchestrator | 2025-03-23 13:52:06.465514 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-03-23 13:52:06.465524 | orchestrator | Sunday 23 March 2025 13:51:56 +0000 (0:00:16.403) 0:01:48.249 ********** 2025-03-23 13:52:06.465538 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:52:09.522201 | orchestrator | 2025-03-23 13:52:09.522299 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-03-23 13:52:09.522316 | orchestrator | Sunday 23 March 2025 13:51:59 +0000 (0:00:02.635) 0:01:50.884 ********** 2025-03-23 13:52:09.522329 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:52:09.522342 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:52:09.522355 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:52:09.522368 | orchestrator | 2025-03-23 13:52:09.522380 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-03-23 13:52:09.522393 | orchestrator | Sunday 23 March 2025 13:51:59 +0000 (0:00:00.823) 0:01:51.707 ********** 2025-03-23 13:52:09.522407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-03-23 13:52:09.522424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-03-23 13:52:09.522438 | orchestrator | 2025-03-23 13:52:09.522451 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-03-23 13:52:09.522463 | orchestrator | Sunday 23 March 2025 13:52:02 +0000 (0:00:02.939) 0:01:54.647 ********** 2025-03-23 13:52:09.522509 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:52:09.522523 | orchestrator | 2025-03-23 13:52:09.522535 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 13:52:09.522549 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-03-23 13:52:09.522563 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-03-23 13:52:09.522575 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-03-23 13:52:09.522588 | orchestrator | 2025-03-23 13:52:09.522625 | orchestrator | 2025-03-23 13:52:09.522638 | orchestrator | TASKS RECAP 
******************************************************************** 2025-03-23 13:52:09.522651 | orchestrator | Sunday 23 March 2025 13:52:03 +0000 (0:00:00.545) 0:01:55.193 ********** 2025-03-23 13:52:09.522663 | orchestrator | =============================================================================== 2025-03-23 13:52:09.522676 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 36.81s 2025-03-23 13:52:09.522688 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 27.53s 2025-03-23 13:52:09.522701 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 16.40s 2025-03-23 13:52:09.522713 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.94s 2025-03-23 13:52:09.522726 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.92s 2025-03-23 13:52:09.522738 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.64s 2025-03-23 13:52:09.522753 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.60s 2025-03-23 13:52:09.522767 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 2.12s 2025-03-23 13:52:09.522781 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.10s 2025-03-23 13:52:09.522795 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 2.08s 2025-03-23 13:52:09.522809 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.60s 2025-03-23 13:52:09.522823 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.58s 2025-03-23 13:52:09.522837 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.45s 2025-03-23 13:52:09.522851 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.41s 2025-03-23 13:52:09.522865 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.18s 2025-03-23 13:52:09.522879 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.18s 2025-03-23 13:52:09.522893 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 1.05s 2025-03-23 13:52:09.522920 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.88s 2025-03-23 13:52:09.522934 | orchestrator | grafana : Remove old grafana docker volume ------------------------------ 0.82s 2025-03-23 13:52:09.522948 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.76s 2025-03-23 13:52:09.522976 | orchestrator | 2025-03-23 13:52:09 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:52:09.523774 | orchestrator | 2025-03-23 13:52:09 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:52:09.524189 | orchestrator | 2025-03-23 13:52:09 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:52:12.568662 | orchestrator | 2025-03-23 13:52:12 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:52:12.571076 | orchestrator | 2025-03-23 13:52:12 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:52:12.571502 | orchestrator | 2025-03-23 13:52:12 | INFO  | Wait 1 second(s) until the next 
check 2025-03-23 13:52:15.638800 | orchestrator | 2025-03-23 13:52:15 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:52:15.639020 | orchestrator | 2025-03-23 13:52:15 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED [... status checks repeated every ~3 seconds, both tasks remaining in state STARTED, until 13:55:58 ...] 2025-03-23 13:56:01.480965 | orchestrator | 2025-03-23 13:55:58 | INFO  | Wait 1 second(s) until
the next check 2025-03-23 13:56:01.481086 | orchestrator | 2025-03-23 13:56:01 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:56:01.482665 | orchestrator | 2025-03-23 13:56:01 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:56:01.483113 | orchestrator | 2025-03-23 13:56:01 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:56:04.525416 | orchestrator | 2025-03-23 13:56:04 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:56:04.527296 | orchestrator | 2025-03-23 13:56:04 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:56:04.527339 | orchestrator | 2025-03-23 13:56:04 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:56:07.570355 | orchestrator | 2025-03-23 13:56:07 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:56:07.572082 | orchestrator | 2025-03-23 13:56:07 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:56:10.612945 | orchestrator | 2025-03-23 13:56:07 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:56:10.613075 | orchestrator | 2025-03-23 13:56:10 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:56:10.614504 | orchestrator | 2025-03-23 13:56:10 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:56:13.652407 | orchestrator | 2025-03-23 13:56:10 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:56:13.652558 | orchestrator | 2025-03-23 13:56:13 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:56:13.653735 | orchestrator | 2025-03-23 13:56:13 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:56:16.696789 | orchestrator | 2025-03-23 13:56:13 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:56:16.696950 | orchestrator | 2025-03-23 13:56:16 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:56:16.697683 | orchestrator | 2025-03-23 13:56:16 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:56:19.755543 | orchestrator | 2025-03-23 13:56:16 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:56:19.755671 | orchestrator | 2025-03-23 13:56:19 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:56:19.758999 | orchestrator | 2025-03-23 13:56:19 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:56:22.797025 | orchestrator | 2025-03-23 13:56:19 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:56:22.797159 | orchestrator | 2025-03-23 13:56:22 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:56:22.798895 | orchestrator | 2025-03-23 13:56:22 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state STARTED 2025-03-23 13:56:25.849592 | orchestrator | 2025-03-23 13:56:22 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:56:25.849728 | orchestrator | 2025-03-23 13:56:25 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:56:25.853353 | orchestrator | 2025-03-23 13:56:25 | INFO  | Task c85b7a2c-673d-4e80-a662-53688b4de508 is in state SUCCESS 2025-03-23 13:56:25.854708 | orchestrator | 2025-03-23 13:56:25.854843 | orchestrator | 2025-03-23 13:56:25.854865 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2025-03-23 13:56:25.854880 | orchestrator | 2025-03-23 13:56:25.854894 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-03-23 13:56:25.854909 | orchestrator | Sunday 23 March 2025 13:46:47 +0000 (0:00:00.950) 0:00:00.950 ********** 2025-03-23 13:56:25.854975 | orchestrator | changed: [testbed-manager] 2025-03-23 13:56:25.855389 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:56:25.855410 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:56:25.855425 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:56:25.855462 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:56:25.855603 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:56:25.855619 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:56:25.855633 | orchestrator | 2025-03-23 13:56:25.855648 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 13:56:25.855662 | orchestrator | Sunday 23 March 2025 13:46:50 +0000 (0:00:03.000) 0:00:03.951 ********** 2025-03-23 13:56:25.855677 | orchestrator | changed: [testbed-manager] 2025-03-23 13:56:25.855691 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:56:25.855936 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:56:25.855953 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:56:25.855967 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:56:25.855982 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:56:25.855997 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:56:25.856100 | orchestrator | 2025-03-23 13:56:25.856116 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-23 13:56:25.856131 | orchestrator | Sunday 23 March 2025 13:46:56 +0000 (0:00:05.538) 0:00:09.489 ********** 2025-03-23 13:56:25.856145 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-03-23 13:56:25.856159 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-03-23 13:56:25.856173 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-03-23 13:56:25.856187 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-03-23 13:56:25.856201 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-03-23 13:56:25.856214 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-03-23 13:56:25.856228 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-03-23 13:56:25.856242 | orchestrator | 2025-03-23 13:56:25.856256 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-03-23 13:56:25.856296 | orchestrator | 2025-03-23 13:56:25.856311 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-03-23 13:56:25.856324 | orchestrator | Sunday 23 March 2025 13:46:59 +0000 (0:00:02.818) 0:00:12.307 ********** 2025-03-23 13:56:25.856338 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:56:25.856352 | orchestrator | 2025-03-23 13:56:25.856366 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-03-23 13:56:25.856380 | orchestrator | Sunday 23 March 2025 13:47:01 +0000 (0:00:02.299) 0:00:14.606 ********** 2025-03-23 13:56:25.856395 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-03-23 13:56:25.856409 | orchestrator | changed: 
[testbed-node-0] => (item=nova_api) 2025-03-23 13:56:25.856423 | orchestrator | 2025-03-23 13:56:25.856459 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-03-23 13:56:25.856474 | orchestrator | Sunday 23 March 2025 13:47:06 +0000 (0:00:05.550) 0:00:20.157 ********** 2025-03-23 13:56:25.856488 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-03-23 13:56:25.856502 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-03-23 13:56:25.856516 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:56:25.856530 | orchestrator | 2025-03-23 13:56:25.857082 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-03-23 13:56:25.857108 | orchestrator | Sunday 23 March 2025 13:47:12 +0000 (0:00:05.866) 0:00:26.024 ********** 2025-03-23 13:56:25.857122 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:56:25.857136 | orchestrator | 2025-03-23 13:56:25.857150 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-03-23 13:56:25.857164 | orchestrator | Sunday 23 March 2025 13:47:13 +0000 (0:00:01.099) 0:00:27.123 ********** 2025-03-23 13:56:25.857178 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:56:25.857192 | orchestrator | 2025-03-23 13:56:25.857206 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-03-23 13:56:25.857219 | orchestrator | Sunday 23 March 2025 13:47:16 +0000 (0:00:03.036) 0:00:30.160 ********** 2025-03-23 13:56:25.857233 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:56:25.857286 | orchestrator | 2025-03-23 13:56:25.857302 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-03-23 13:56:25.857316 | orchestrator | Sunday 23 March 2025 13:47:26 +0000 (0:00:10.099) 0:00:40.259 ********** 2025-03-23 13:56:25.857503 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.857519 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.857533 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.857547 | orchestrator | 2025-03-23 13:56:25.857915 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-03-23 13:56:25.857938 | orchestrator | Sunday 23 March 2025 13:47:28 +0000 (0:00:01.723) 0:00:41.982 ********** 2025-03-23 13:56:25.857952 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:56:25.857966 | orchestrator | 2025-03-23 13:56:25.857980 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-03-23 13:56:25.857994 | orchestrator | Sunday 23 March 2025 13:48:00 +0000 (0:00:31.305) 0:01:13.288 ********** 2025-03-23 13:56:25.858008 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:56:25.858115 | orchestrator | 2025-03-23 13:56:25.858131 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-03-23 13:56:25.858469 | orchestrator | Sunday 23 March 2025 13:48:15 +0000 (0:00:15.105) 0:01:28.393 ********** 2025-03-23 13:56:25.858485 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:56:25.858499 | orchestrator | 2025-03-23 13:56:25.858513 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-03-23 13:56:25.858536 | orchestrator | Sunday 23 March 2025 13:48:26 +0000 (0:00:11.572) 0:01:39.966 ********** 2025-03-23 13:56:25.858919 | orchestrator | ok: [testbed-node-0] 2025-03-23 
13:56:25.858943 | orchestrator | 2025-03-23 13:56:25.858958 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-03-23 13:56:25.858987 | orchestrator | Sunday 23 March 2025 13:48:28 +0000 (0:00:02.198) 0:01:42.166 ********** 2025-03-23 13:56:25.859001 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.859015 | orchestrator | 2025-03-23 13:56:25.859029 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-03-23 13:56:25.859043 | orchestrator | Sunday 23 March 2025 13:48:30 +0000 (0:00:01.602) 0:01:43.769 ********** 2025-03-23 13:56:25.859057 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:56:25.859071 | orchestrator | 2025-03-23 13:56:25.859086 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-03-23 13:56:25.859174 | orchestrator | Sunday 23 March 2025 13:48:31 +0000 (0:00:01.299) 0:01:45.069 ********** 2025-03-23 13:56:25.859263 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:56:25.859278 | orchestrator | 2025-03-23 13:56:25.859292 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-03-23 13:56:25.859306 | orchestrator | Sunday 23 March 2025 13:48:50 +0000 (0:00:18.918) 0:02:03.987 ********** 2025-03-23 13:56:25.859533 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.859556 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.859571 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.859585 | orchestrator | 2025-03-23 13:56:25.859599 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-03-23 13:56:25.859613 | orchestrator | 2025-03-23 13:56:25.859626 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-03-23 13:56:25.859640 | orchestrator | Sunday 23 March 2025 13:48:51 +0000 (0:00:01.031) 0:02:05.020 ********** 2025-03-23 13:56:25.859654 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:56:25.859668 | orchestrator | 2025-03-23 13:56:25.859682 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-03-23 13:56:25.859695 | orchestrator | Sunday 23 March 2025 13:48:54 +0000 (0:00:02.815) 0:02:07.836 ********** 2025-03-23 13:56:25.859709 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.859723 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.859737 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:56:25.859751 | orchestrator | 2025-03-23 13:56:25.859764 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-03-23 13:56:25.859778 | orchestrator | Sunday 23 March 2025 13:48:57 +0000 (0:00:02.660) 0:02:10.496 ********** 2025-03-23 13:56:25.859792 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.859806 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.859820 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:56:25.859834 | orchestrator | 2025-03-23 13:56:25.859848 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-03-23 13:56:25.859861 | orchestrator | Sunday 23 March 2025 13:48:59 +0000 (0:00:02.463) 0:02:12.959 ********** 2025-03-23 13:56:25.859875 | orchestrator | skipping: 
[testbed-node-0] 2025-03-23 13:56:25.859889 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.859903 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.859917 | orchestrator | 2025-03-23 13:56:25.859931 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-03-23 13:56:25.859945 | orchestrator | Sunday 23 March 2025 13:49:00 +0000 (0:00:00.512) 0:02:13.472 ********** 2025-03-23 13:56:25.859958 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-03-23 13:56:25.859972 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.859986 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-03-23 13:56:25.860000 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.860014 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-03-23 13:56:25.860028 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-03-23 13:56:25.860041 | orchestrator | 2025-03-23 13:56:25.860055 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-03-23 13:56:25.860069 | orchestrator | Sunday 23 March 2025 13:49:09 +0000 (0:00:09.475) 0:02:22.947 ********** 2025-03-23 13:56:25.860093 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.860107 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.860121 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.860135 | orchestrator | 2025-03-23 13:56:25.860149 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-03-23 13:56:25.860163 | orchestrator | Sunday 23 March 2025 13:49:10 +0000 (0:00:00.443) 0:02:23.391 ********** 2025-03-23 13:56:25.860177 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-03-23 13:56:25.860193 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-03-23 13:56:25.860208 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.860224 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.860239 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-03-23 13:56:25.860254 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.860269 | orchestrator | 2025-03-23 13:56:25.860285 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-03-23 13:56:25.860300 | orchestrator | Sunday 23 March 2025 13:49:11 +0000 (0:00:01.541) 0:02:24.932 ********** 2025-03-23 13:56:25.860316 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.860331 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.860347 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:56:25.860362 | orchestrator | 2025-03-23 13:56:25.860378 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-03-23 13:56:25.860394 | orchestrator | Sunday 23 March 2025 13:49:12 +0000 (0:00:00.744) 0:02:25.676 ********** 2025-03-23 13:56:25.860409 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.860424 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.860495 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:56:25.860511 | orchestrator | 2025-03-23 13:56:25.860527 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-03-23 13:56:25.860541 | orchestrator | Sunday 23 March 2025 13:49:13 +0000 (0:00:01.047) 0:02:26.724 ********** 2025-03-23 13:56:25.860555 | orchestrator | skipping: 
[testbed-node-1] 2025-03-23 13:56:25.860569 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.860668 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:56:25.860689 | orchestrator | 2025-03-23 13:56:25.860703 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-03-23 13:56:25.860725 | orchestrator | Sunday 23 March 2025 13:49:16 +0000 (0:00:02.614) 0:02:29.338 ********** 2025-03-23 13:56:25.860740 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.860754 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.860767 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:56:25.860782 | orchestrator | 2025-03-23 13:56:25.860796 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-03-23 13:56:25.860809 | orchestrator | Sunday 23 March 2025 13:49:36 +0000 (0:00:20.129) 0:02:49.469 ********** 2025-03-23 13:56:25.860823 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.860837 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.860851 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:56:25.860866 | orchestrator | 2025-03-23 13:56:25.860880 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-03-23 13:56:25.860894 | orchestrator | Sunday 23 March 2025 13:49:50 +0000 (0:00:14.079) 0:03:03.549 ********** 2025-03-23 13:56:25.860907 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:56:25.860921 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.860940 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.860954 | orchestrator | 2025-03-23 13:56:25.860968 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-03-23 13:56:25.860982 | orchestrator | Sunday 23 March 2025 13:49:52 +0000 (0:00:01.743) 0:03:05.293 ********** 2025-03-23 13:56:25.860996 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.861008 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.861021 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:56:25.861041 | orchestrator | 2025-03-23 13:56:25.861053 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-03-23 13:56:25.861066 | orchestrator | Sunday 23 March 2025 13:50:05 +0000 (0:00:13.259) 0:03:18.552 ********** 2025-03-23 13:56:25.861078 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.861090 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.861102 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.861115 | orchestrator | 2025-03-23 13:56:25.861127 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-03-23 13:56:25.861139 | orchestrator | Sunday 23 March 2025 13:50:07 +0000 (0:00:01.767) 0:03:20.320 ********** 2025-03-23 13:56:25.861151 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.861164 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.861176 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.861188 | orchestrator | 2025-03-23 13:56:25.861219 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-03-23 13:56:25.861233 | orchestrator | 2025-03-23 13:56:25.861246 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-03-23 13:56:25.861259 | orchestrator | Sunday 23 March 2025 13:50:07 +0000 
(0:00:00.526) 0:03:20.846 ********** 2025-03-23 13:56:25.861271 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:56:25.861286 | orchestrator | 2025-03-23 13:56:25.861300 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-03-23 13:56:25.861314 | orchestrator | Sunday 23 March 2025 13:50:08 +0000 (0:00:00.904) 0:03:21.751 ********** 2025-03-23 13:56:25.861328 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-03-23 13:56:25.861342 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-03-23 13:56:25.861356 | orchestrator | 2025-03-23 13:56:25.861370 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-03-23 13:56:25.861384 | orchestrator | Sunday 23 March 2025 13:50:12 +0000 (0:00:03.893) 0:03:25.645 ********** 2025-03-23 13:56:25.861398 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-03-23 13:56:25.861413 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-03-23 13:56:25.861427 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-03-23 13:56:25.861495 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-03-23 13:56:25.861510 | orchestrator | 2025-03-23 13:56:25.861524 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-03-23 13:56:25.861538 | orchestrator | Sunday 23 March 2025 13:50:19 +0000 (0:00:07.485) 0:03:33.130 ********** 2025-03-23 13:56:25.861552 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-03-23 13:56:25.861565 | orchestrator | 2025-03-23 13:56:25.861579 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-03-23 13:56:25.861591 | orchestrator | Sunday 23 March 2025 13:50:23 +0000 (0:00:03.369) 0:03:36.499 ********** 2025-03-23 13:56:25.861602 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-03-23 13:56:25.861614 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-03-23 13:56:25.861625 | orchestrator | 2025-03-23 13:56:25.861636 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-03-23 13:56:25.861647 | orchestrator | Sunday 23 March 2025 13:50:27 +0000 (0:00:04.710) 0:03:41.210 ********** 2025-03-23 13:56:25.861657 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-03-23 13:56:25.861667 | orchestrator | 2025-03-23 13:56:25.861677 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-03-23 13:56:25.861687 | orchestrator | Sunday 23 March 2025 13:50:31 +0000 (0:00:03.509) 0:03:44.720 ********** 2025-03-23 13:56:25.861698 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-03-23 13:56:25.861714 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-03-23 13:56:25.861724 | orchestrator | 2025-03-23 13:56:25.861734 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-03-23 13:56:25.861807 | orchestrator | Sunday 23 March 2025 13:50:40 +0000 (0:00:09.146) 0:03:53.866 
********** 2025-03-23 13:56:25.861825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-23 13:56:25.861838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-23 13:56:25.861879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.861891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.861965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-23 13:56:25.861980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.861991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.862002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.862046 
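Each loop item in these tasks is one entry of the role's nova services dict, and entries whose enabled flag is 'no' are skipped, which is why nova-super-conductor is skipped on every node while nova-api and nova-scheduler report changed. A rough Python sketch of that filtering, assuming a trimmed-down services dict; the real logic lives in the role's when: conditions, so this is only an illustration:

# Sketch of the enabled/disabled filtering visible in the loop output above.
# The dict shape mirrors the printed items; only a few keys are kept.
SERVICES = {
    "nova-api": {"container_name": "nova_api", "enabled": True},
    "nova-scheduler": {"container_name": "nova_scheduler", "enabled": True},
    "nova-super-conductor": {"container_name": "nova_super_conductor",
                             "enabled": "no"},  # a string, exactly as in the log
}

def enabled_services(services: dict) -> dict:
    """Keep only services whose 'enabled' value is truthy and not 'no'/'false'."""
    result = {}
    for name, spec in services.items():
        flag = spec.get("enabled", False)
        if isinstance(flag, str):
            flag = flag.strip().lower() not in ("no", "false", "0", "")
        if flag:
            result[name] = spec
    return result

print(sorted(enabled_services(SERVICES)))  # ['nova-api', 'nova-scheduler']
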
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.862060 | orchestrator | 2025-03-23 13:56:25.862070 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-03-23 13:56:25.862087 | orchestrator | Sunday 23 March 2025 13:50:42 +0000 (0:00:01.921) 0:03:55.787 ********** 2025-03-23 13:56:25.862098 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.862108 | orchestrator | 2025-03-23 13:56:25.862118 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-03-23 13:56:25.862128 | orchestrator | Sunday 23 March 2025 13:50:42 +0000 (0:00:00.135) 0:03:55.923 ********** 2025-03-23 13:56:25.862138 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.862149 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.862159 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.862169 | orchestrator | 2025-03-23 13:56:25.862179 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-03-23 13:56:25.862189 | orchestrator | Sunday 23 March 2025 13:50:43 +0000 (0:00:00.576) 0:03:56.500 ********** 2025-03-23 13:56:25.862200 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-23 13:56:25.862210 | orchestrator | 2025-03-23 13:56:25.862290 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-03-23 13:56:25.862307 | orchestrator | Sunday 23 March 2025 13:50:43 +0000 (0:00:00.460) 0:03:56.961 ********** 2025-03-23 13:56:25.862318 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.862329 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.862339 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.862350 | orchestrator | 2025-03-23 13:56:25.862360 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-03-23 13:56:25.862371 | orchestrator | Sunday 23 March 2025 13:50:44 +0000 (0:00:00.357) 0:03:57.318 ********** 2025-03-23 13:56:25.862387 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:56:25.862425 | orchestrator | 2025-03-23 13:56:25.862452 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-03-23 13:56:25.862463 | orchestrator | Sunday 23 March 2025 13:50:45 +0000 (0:00:01.351) 0:03:58.670 ********** 2025-03-23 13:56:25.862484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-23 13:56:25.862496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-23 13:56:25.862583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-23 13:56:25.862600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.862611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.862621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.862632 | orchestrator | 2025-03-23 13:56:25.862643 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-03-23 13:56:25.862653 | orchestrator | Sunday 23 March 2025 13:50:48 +0000 (0:00:03.071) 0:04:01.742 ********** 2025-03-23 13:56:25.862671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-23 13:56:25.862688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.862750 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.862765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-23 13:56:25.862776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.862787 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.862797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-23 13:56:25.862824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.862834 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.862845 | orchestrator | 2025-03-23 13:56:25.862855 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-03-23 13:56:25.862865 | orchestrator | Sunday 23 March 2025 13:50:49 +0000 (0:00:00.893) 0:04:02.635 ********** 2025-03-23 13:56:25.862944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-23 13:56:25.862974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.862985 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.862995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-23 13:56:25.863012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.863023 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.863089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-23 13:56:25.863105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.863115 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.863125 | orchestrator | 2025-03-23 
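The healthcheck blocks carried in these items are of two kinds: an HTTP probe against the API bind address (healthcheck_curl http://<ip>:8774) and a port check tied to a process name (healthcheck_port nova-scheduler 5672, i.e. its RabbitMQ connection). The snippet below is only a rough stand-in for such probes using the standard library; the actual kolla healthcheck scripts differ in detail (process matching, retries are handled by Docker itself):

import socket
import urllib.error
import urllib.request

def http_probe(url: str, timeout: float = 30.0) -> bool:
    """Return True if the endpoint answers any HTTP response within the timeout."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # got a response (e.g. 401/404), so the service is up
    except OSError:
        return False  # connection refused, unreachable, or timed out

def tcp_probe(host: str, port: int, timeout: float = 30.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example values taken from the log above.
print(http_probe("http://192.168.16.10:8774"))
print(tcp_probe("192.168.16.10", 5672))
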
13:56:25.863136 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-03-23 13:56:25.863146 | orchestrator | Sunday 23 March 2025 13:50:50 +0000 (0:00:01.087) 0:04:03.723 ********** 2025-03-23 13:56:25.863156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-23 13:56:25.863185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-23 13:56:25.863262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-23 13:56:25.863291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.863309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.863321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.863332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.863396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.863412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.863422 | orchestrator | 2025-03-23 13:56:25.863448 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-03-23 13:56:25.863460 | orchestrator | Sunday 23 March 2025 13:50:53 +0000 (0:00:02.745) 0:04:06.469 ********** 2025-03-23 13:56:25.863470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-23 13:56:25.863501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-23 13:56:25.863567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-23 13:56:25.863583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.863594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.863626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.863653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.863664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.863732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.863747 | orchestrator | 2025-03-23 13:56:25.863759 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-03-23 13:56:25.863769 | orchestrator | Sunday 23 March 2025 13:51:00 +0000 (0:00:07.304) 0:04:13.773 ********** 2025-03-23 13:56:25.863780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-23 13:56:25.863810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.863821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.863832 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.863843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-23 13:56:25.863931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-23 13:56:25.863955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.863967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.863979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.863990 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.864001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.864012 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.864023 | orchestrator | 2025-03-23 13:56:25.864034 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-03-23 13:56:25.864045 | orchestrator | Sunday 23 March 2025 13:51:01 +0000 (0:00:00.943) 0:04:14.717 ********** 2025-03-23 13:56:25.864056 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:56:25.864066 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:56:25.864077 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:56:25.864088 | orchestrator | 2025-03-23 13:56:25.864098 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-03-23 13:56:25.864109 | orchestrator | Sunday 23 March 2025 13:51:03 +0000 (0:00:01.920) 
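The policy-related tasks here ("Check if policies shall be overwritten", "Copying over existing policy file") skip on all nodes because this testbed ships no custom policy file; the pattern is simply "deploy the override only if it exists locally". A small hypothetical sketch of that check (the paths are made up for illustration and are not taken from the deployment):

from pathlib import Path
import shutil

def maybe_deploy_policy(src: Path, dest_dir: Path) -> bool:
    """Copy a custom policy file into the service config dir only if one exists."""
    if not src.is_file():
        return False   # nothing to overwrite -> the task is skipped
    dest_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dest_dir / src.name)
    return True

# Hypothetical paths, for illustration only:
deployed = maybe_deploy_policy(Path("/opt/configuration/nova/policy.yaml"),
                               Path("/etc/kolla/nova-api"))
print("policy deployed" if deployed else "skipped: no custom policy file")
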
0:04:16.637 ********** 2025-03-23 13:56:25.864169 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.864184 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.864208 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.864219 | orchestrator | 2025-03-23 13:56:25.864230 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-03-23 13:56:25.864241 | orchestrator | Sunday 23 March 2025 13:51:03 +0000 (0:00:00.517) 0:04:17.155 ********** 2025-03-23 13:56:25.864258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-23 13:56:25.864281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-23 13:56:25.864292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-23 13:56:25.864356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.864378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.864389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.864411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.864422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.864480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.864492 | orchestrator | 2025-03-23 13:56:25.864503 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-03-23 13:56:25.864513 | orchestrator | Sunday 23 March 2025 13:51:06 +0000 (0:00:02.609) 0:04:19.764 ********** 2025-03-23 13:56:25.864523 | orchestrator | 2025-03-23 13:56:25.864534 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-03-23 13:56:25.864544 | orchestrator | Sunday 23 March 2025 13:51:06 +0000 (0:00:00.318) 0:04:20.083 ********** 2025-03-23 13:56:25.864554 | orchestrator | 2025-03-23 13:56:25.864564 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-03-23 13:56:25.864580 | orchestrator | Sunday 23 March 2025 13:51:06 +0000 (0:00:00.120) 0:04:20.203 ********** 2025-03-23 13:56:25.864590 | orchestrator | 2025-03-23 13:56:25.864600 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-03-23 13:56:25.864671 | orchestrator | Sunday 23 March 2025 13:51:07 +0000 (0:00:00.351) 0:04:20.554 ********** 2025-03-23 13:56:25.864687 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:56:25.864698 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:56:25.864709 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:56:25.864719 | orchestrator | 2025-03-23 13:56:25.864730 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-03-23 13:56:25.864740 | orchestrator | Sunday 23 March 2025 13:51:26 +0000 (0:00:19.284) 0:04:39.839 ********** 2025-03-23 13:56:25.864751 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:56:25.864761 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:56:25.864772 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:56:25.864782 | orchestrator | 2025-03-23 13:56:25.864793 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-03-23 13:56:25.864803 | orchestrator | 2025-03-23 13:56:25.864814 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-03-23 13:56:25.864824 | orchestrator | Sunday 23 March 2025 13:51:37 +0000 (0:00:10.996) 0:04:50.835 ********** 2025-03-23 13:56:25.864840 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:56:25.864852 | orchestrator | 2025-03-23 13:56:25.864862 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-03-23 13:56:25.864886 | orchestrator | Sunday 23 March 2025 13:51:39 +0000 (0:00:01.605) 0:04:52.441 ********** 2025-03-23 13:56:25.864898 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:56:25.864909 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:56:25.864919 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:56:25.864930 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.864941 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.864951 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.864962 | orchestrator | 2025-03-23 13:56:25.864972 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-03-23 13:56:25.864983 | orchestrator | Sunday 23 March 2025 13:51:39 +0000 (0:00:00.831) 0:04:53.273 ********** 2025-03-23 13:56:25.864994 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.865003 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.865012 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.865021 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:56:25.865030 | orchestrator | 2025-03-23 13:56:25.865039 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-03-23 13:56:25.865048 | orchestrator | Sunday 23 March 2025 13:51:41 +0000 (0:00:01.558) 0:04:54.832 ********** 2025-03-23 13:56:25.865057 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-03-23 13:56:25.865066 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-03-23 13:56:25.865075 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-03-23 13:56:25.865084 | orchestrator | 2025-03-23 13:56:25.865093 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-03-23 13:56:25.865102 | orchestrator | Sunday 23 March 2025 13:51:42 +0000 (0:00:00.713) 0:04:55.545 ********** 2025-03-23 13:56:25.865111 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-03-23 13:56:25.865120 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-03-23 13:56:25.865129 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-03-23 13:56:25.865138 | orchestrator | 2025-03-23 13:56:25.865147 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-03-23 13:56:25.865156 | orchestrator | Sunday 23 March 2025 13:51:44 +0000 (0:00:01.852) 0:04:57.397 ********** 2025-03-23 13:56:25.865170 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-03-23 13:56:25.865179 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:56:25.865188 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-03-23 13:56:25.865197 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:56:25.865206 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-03-23 13:56:25.865215 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:56:25.865231 | orchestrator | 2025-03-23 13:56:25.865240 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-03-23 13:56:25.865249 | orchestrator | Sunday 23 March 2025 13:51:45 +0000 (0:00:01.010) 0:04:58.408 
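[Editor's note] The module-load tasks above load br_netfilter on the compute nodes (testbed-node-3/4/5) and persist it via modules-load.d so it survives reboots. A minimal sketch of the same two steps done by hand; the file name is illustrative, not necessarily the one the role writes:

  # Load the bridge netfilter module now
  modprobe br_netfilter
  # Persist it for the next boot via systemd's modules-load.d mechanism (hypothetical file name)
  printf 'br_netfilter\n' > /etc/modules-load.d/br_netfilter.conf
  # Confirm it is loaded
  lsmod | grep -w br_netfilter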
********** 2025-03-23 13:56:25.865258 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-03-23 13:56:25.865267 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-03-23 13:56:25.865276 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.865286 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-03-23 13:56:25.865295 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-03-23 13:56:25.865304 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-03-23 13:56:25.865313 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-03-23 13:56:25.865322 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.865331 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-03-23 13:56:25.865340 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-03-23 13:56:25.865349 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-03-23 13:56:25.865358 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-03-23 13:56:25.865366 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.865375 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-03-23 13:56:25.865384 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-03-23 13:56:25.865393 | orchestrator | 2025-03-23 13:56:25.865462 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-03-23 13:56:25.865480 | orchestrator | Sunday 23 March 2025 13:51:47 +0000 (0:00:01.983) 0:05:00.392 ********** 2025-03-23 13:56:25.865489 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.865498 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.865508 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.865517 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:56:25.865525 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:56:25.865534 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:56:25.865543 | orchestrator | 2025-03-23 13:56:25.865552 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-03-23 13:56:25.865562 | orchestrator | Sunday 23 March 2025 13:51:48 +0000 (0:00:01.151) 0:05:01.544 ********** 2025-03-23 13:56:25.865571 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.865579 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.865588 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.865598 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:56:25.865607 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:56:25.865615 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:56:25.865624 | orchestrator | 2025-03-23 13:56:25.865633 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-03-23 13:56:25.865642 | orchestrator | Sunday 23 March 2025 13:51:50 +0000 (0:00:02.031) 0:05:03.576 ********** 2025-03-23 13:56:25.865653 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
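[Editor's note] The two tasks above enable the bridge-nf-call sysctls on the compute nodes (so bridged traffic is passed through iptables/ip6tables) and mask the distribution's qemu-kvm service so that only the containerised libvirt manages KVM. A rough manual equivalent, with an illustrative sysctl.d file name and the unit name assumed to be qemu-kvm.service:

  # Persist and apply the bridge netfilter sysctls (file name is illustrative)
  cat > /etc/sysctl.d/90-bridge-nf-call.conf <<'EOF'
  net.bridge.bridge-nf-call-iptables = 1
  net.bridge.bridge-nf-call-ip6tables = 1
  EOF
  sysctl --system
  # Mask the host qemu-kvm unit, mirroring the "Mask qemu-kvm service" task (unit name assumed)
  systemctl mask qemu-kvm.service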
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-23 13:56:25.865669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.865679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.865745 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-23 13:56:25.865759 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-23 13:56:25.865769 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-23 13:56:25.865784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.865794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.865805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.865816 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.865879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.865892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.865907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-23 13:56:25.865917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.865935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
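[Editor's note] The nova_libvirt definitions above run the container with the host PID and cgroup namespaces, privileged mode, a raised memlock ulimit and a set of host bind mounts. Purely to illustrate what those fields translate to (kolla-ansible starts the container itself; this is not how the deployment invokes it), the key settings correspond to docker run flags along these lines:

  # Subset of the nova_libvirt settings expressed as docker run flags (illustrative only)
  docker run -d --name nova_libvirt \
    --pid=host --cgroupns=host --privileged \
    --ulimit memlock=67108864:67108864 \
    -v /run:/run:shared -v /dev:/dev -v /sys/fs/cgroup:/sys/fs/cgroup \
    -v /lib/modules:/lib/modules:ro \
    registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206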
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.865944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.866008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.866046 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-23 13:56:25.866062 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.866072 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-23 13:56:25.866081 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 
'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.866091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.866101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.866158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.866172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.866208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.866219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.866229 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.866239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.866248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-23 13:56:25.866303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.866322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 
'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.866332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.866350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.866360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.866370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-23 13:56:25.866424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.866460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.866471 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.866480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.866512 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.866522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
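[Editor's note] Where the API-style containers above use healthcheck_curl, the compute and conductor definitions use healthcheck_port (for example healthcheck_port nova-compute 5672, i.e. the nova-compute process checked against the RabbitMQ port). Since that helper ships inside the kolla images (it is the command the healthcheck test line executes), the same check can be invoked by hand once the container is up; the container and process names below are taken from the definitions above:

  # Run the container's own port health check manually (same command string as in the definition)
  docker exec nova_compute healthcheck_port nova-compute 5672 && echo "nova-compute passes its port check"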
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.866578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.866596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.866606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.866624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.866633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.866642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.866651 | orchestrator | 2025-03-23 13:56:25.866660 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-03-23 13:56:25.866668 | orchestrator | Sunday 23 March 2025 13:51:53 +0000 (0:00:02.835) 0:05:06.411 ********** 2025-03-23 13:56:25.866677 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 13:56:25.866694 | orchestrator | 2025-03-23 13:56:25.866703 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-03-23 13:56:25.866712 | orchestrator | Sunday 23 March 2025 13:51:54 +0000 (0:00:01.827) 0:05:08.239 ********** 2025-03-23 13:56:25.866765 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-23 13:56:25.866779 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-23 13:56:25.866799 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-23 13:56:25.866819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-23 13:56:25.866830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-23 13:56:25.866893 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-23 13:56:25.866906 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-23 13:56:25.866916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 
'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-23 13:56:25.866934 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-23 13:56:25.866944 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.866954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.866968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.867032 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.867056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.867066 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.867075 | orchestrator | 2025-03-23 13:56:25.867084 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-03-23 13:56:25.867093 | orchestrator | Sunday 23 March 2025 13:51:59 +0000 (0:00:04.791) 0:05:13.030 ********** 2025-03-23 13:56:25.867102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.867117 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
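[Editor's note] The copy-certs step above distributes extra CA certificates into each nova service's configuration directory before the containers are (re)started. If you later want to confirm that a service certificate chains to one of those CAs, a generic check looks like the following; both paths are hypothetical placeholders, not the locations kolla-ansible actually uses:

  # Verify a service certificate against an extra CA bundle (paths are placeholders)
  openssl verify -CAfile /path/to/extra-ca-bundle.pem /path/to/service-cert.pem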
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.867182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.867196 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:56:25.867206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.867215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.867225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.867239 | orchestrator | skipping: [testbed-node-3] 2025-03-23 
13:56:25.867248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.867324 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.867339 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.867349 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:56:25.867358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.867367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.867377 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.867386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.867401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.867410 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.867477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.867500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.867511 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.867520 | orchestrator | 2025-03-23 13:56:25.867529 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-03-23 13:56:25.867538 | orchestrator | Sunday 23 March 2025 13:52:01 +0000 (0:00:02.083) 0:05:15.114 ********** 2025-03-23 13:56:25.867547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 
'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.867557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.867575 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.867584 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:56:25.867622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.867634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.867654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.867664 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:56:25.867673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.867687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.867697 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.867715 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.867745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.867757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.867766 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:56:25.867776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.867785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.867800 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.867809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.867819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.867836 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.867846 | orchestrator | 2025-03-23 13:56:25.867855 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-03-23 13:56:25.867865 | orchestrator | Sunday 23 March 2025 13:52:04 +0000 (0:00:02.757) 0:05:17.872 ********** 2025-03-23 13:56:25.867874 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.867884 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.867893 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.867902 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 13:56:25.867912 | orchestrator | 2025-03-23 13:56:25.867921 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-03-23 13:56:25.867930 | orchestrator | Sunday 23 March 2025 13:52:05 +0000 (0:00:01.320) 0:05:19.192 ********** 2025-03-23 13:56:25.867959 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-03-23 13:56:25.867969 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-03-23 13:56:25.867979 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-03-23 13:56:25.867987 | orchestrator | 2025-03-23 13:56:25.867996 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-03-23 13:56:25.868005 | orchestrator | Sunday 23 March 2025 13:52:06 +0000 (0:00:00.899) 0:05:20.092 ********** 2025-03-23 13:56:25.868014 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-03-23 13:56:25.868023 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-03-23 13:56:25.868032 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-03-23 13:56:25.868041 | orchestrator | 2025-03-23 13:56:25.868050 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-03-23 13:56:25.868058 | orchestrator | Sunday 23 March 2025 13:52:07 +0000 (0:00:00.901) 0:05:20.994 ********** 2025-03-23 13:56:25.868067 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:56:25.868076 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:56:25.868085 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:56:25.868094 | orchestrator | 2025-03-23 13:56:25.868103 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-03-23 13:56:25.868112 | orchestrator | Sunday 23 March 2025 13:52:08 +0000 (0:00:00.981) 0:05:21.976 ********** 2025-03-23 13:56:25.868122 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:56:25.868131 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:56:25.868141 | orchestrator | ok: [testbed-node-5] 2025-03-23 13:56:25.868155 | orchestrator | 2025-03-23 13:56:25.868165 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-03-23 13:56:25.868175 | orchestrator | Sunday 23 March 2025 13:52:09 +0000 (0:00:00.342) 0:05:22.318 ********** 2025-03-23 13:56:25.868185 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 
2025-03-23 13:56:25.868194 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-03-23 13:56:25.868204 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-03-23 13:56:25.868213 | orchestrator | 2025-03-23 13:56:25.868223 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-03-23 13:56:25.868232 | orchestrator | Sunday 23 March 2025 13:52:10 +0000 (0:00:01.482) 0:05:23.801 ********** 2025-03-23 13:56:25.868242 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-03-23 13:56:25.868251 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-03-23 13:56:25.868261 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-03-23 13:56:25.868271 | orchestrator | 2025-03-23 13:56:25.868280 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-03-23 13:56:25.868290 | orchestrator | Sunday 23 March 2025 13:52:12 +0000 (0:00:01.580) 0:05:25.381 ********** 2025-03-23 13:56:25.868299 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-03-23 13:56:25.868308 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-03-23 13:56:25.868318 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-03-23 13:56:25.868327 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-03-23 13:56:25.868340 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-03-23 13:56:25.868350 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-03-23 13:56:25.868359 | orchestrator | 2025-03-23 13:56:25.868369 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-03-23 13:56:25.868379 | orchestrator | Sunday 23 March 2025 13:52:18 +0000 (0:00:06.584) 0:05:31.966 ********** 2025-03-23 13:56:25.868388 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:56:25.868397 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:56:25.868407 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:56:25.868416 | orchestrator | 2025-03-23 13:56:25.868426 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-03-23 13:56:25.868453 | orchestrator | Sunday 23 March 2025 13:52:19 +0000 (0:00:00.384) 0:05:32.351 ********** 2025-03-23 13:56:25.868463 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:56:25.868472 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:56:25.868481 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:56:25.868489 | orchestrator | 2025-03-23 13:56:25.868498 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-03-23 13:56:25.868506 | orchestrator | Sunday 23 March 2025 13:52:19 +0000 (0:00:00.608) 0:05:32.959 ********** 2025-03-23 13:56:25.868515 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:56:25.868523 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:56:25.868531 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:56:25.868540 | orchestrator | 2025-03-23 13:56:25.868549 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-03-23 13:56:25.868557 | orchestrator | Sunday 23 March 2025 13:52:21 +0000 (0:00:01.797) 0:05:34.757 ********** 2025-03-23 13:56:25.868566 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 
2025-03-23 13:56:25.868575 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-03-23 13:56:25.868584 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-03-23 13:56:25.868593 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-03-23 13:56:25.868607 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-03-23 13:56:25.868616 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-03-23 13:56:25.868624 | orchestrator | 2025-03-23 13:56:25.868633 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-03-23 13:56:25.868661 | orchestrator | Sunday 23 March 2025 13:52:25 +0000 (0:00:04.045) 0:05:38.803 ********** 2025-03-23 13:56:25.868671 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-03-23 13:56:25.868680 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-03-23 13:56:25.868688 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-03-23 13:56:25.868697 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-03-23 13:56:25.868705 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:56:25.868720 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-03-23 13:56:25.868730 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:56:25.868739 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-03-23 13:56:25.868748 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:56:25.868757 | orchestrator | 2025-03-23 13:56:25.868765 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-03-23 13:56:25.868774 | orchestrator | Sunday 23 March 2025 13:52:29 +0000 (0:00:03.734) 0:05:42.537 ********** 2025-03-23 13:56:25.868783 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:56:25.868791 | orchestrator | 2025-03-23 13:56:25.868800 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-03-23 13:56:25.868808 | orchestrator | Sunday 23 March 2025 13:52:29 +0000 (0:00:00.144) 0:05:42.682 ********** 2025-03-23 13:56:25.868817 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:56:25.868825 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:56:25.868834 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:56:25.868842 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.868851 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.868859 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.868867 | orchestrator | 2025-03-23 13:56:25.868876 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-03-23 13:56:25.868884 | orchestrator | Sunday 23 March 2025 13:52:30 +0000 (0:00:01.018) 0:05:43.701 ********** 2025-03-23 13:56:25.868893 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-03-23 13:56:25.868902 | orchestrator | 2025-03-23 13:56:25.868910 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-03-23 13:56:25.868918 | orchestrator | Sunday 23 
March 2025 13:52:30 +0000 (0:00:00.502) 0:05:44.203 ********** 2025-03-23 13:56:25.868927 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:56:25.868936 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:56:25.868944 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:56:25.868952 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.868961 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.868969 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.868978 | orchestrator | 2025-03-23 13:56:25.868986 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-03-23 13:56:25.868995 | orchestrator | Sunday 23 March 2025 13:52:31 +0000 (0:00:01.029) 0:05:45.233 ********** 2025-03-23 13:56:25.869004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.869017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.869045 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-23 13:56:25.869065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.869075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.869084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.869099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.869108 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-23 13:56:25.869144 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-23 13:56:25.869156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-23 13:56:25.869165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.869174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.869191 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-23 13:56:25.869200 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.869229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.869239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.869248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.869257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-23 13:56:25.869278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.869288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.869296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-23 13:56:25.869324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.869335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.869344 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-23 13:56:25.869353 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.869374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.869383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.869392 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-23 13:56:25.869401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.869469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.869483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 
'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.869492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.869506 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.869525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.869534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.869564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.869575 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.869584 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.869606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.869616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.869625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.869652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.869663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.869671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.869692 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.869702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.869711 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.869739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.869749 | orchestrator | 2025-03-23 13:56:25.869758 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-03-23 13:56:25.869766 | orchestrator | Sunday 23 March 2025 13:52:36 +0000 (0:00:04.081) 0:05:49.315 ********** 2025-03-23 13:56:25.869775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.869795 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 
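(Editorial aside on the per-service output above.) Each item in these loops is one entry of the nova-cell service map: container_name, group, image, enabled, volumes and a healthcheck block. Whether a host prints "changed" or "skipping" for a given item follows from whether the service is enabled and the host sits in that service's inventory group, which is why testbed-node-3/4/5 act on nova-compute while testbed-node-0/1/2 act on nova-conductor and nova-novncproxy. A minimal sketch of that gating is below; the helper name and the group memberships are illustrative assumptions, not kolla-ansible's actual code.

    # Sketch: why each host reports "changed" or "skipping" per service item.
    # The dict shape mirrors the items printed above; the group memberships are
    # assumptions based on this testbed's layout, not read from the inventory.
    SERVICES = {
        "nova-compute":        {"group": "compute",             "enabled": True},
        "nova-conductor":      {"group": "nova-conductor",      "enabled": True},
        "nova-compute-ironic": {"group": "nova-compute-ironic", "enabled": False},
    }

    HOST_GROUPS = {
        "testbed-node-0": {"nova-conductor", "nova-novncproxy"},  # controller
        "testbed-node-3": {"compute"},                            # compute
    }

    def acts_on(host: str, service: dict) -> bool:
        """True -> the task would report 'changed', False -> 'skipping'."""
        return service["enabled"] and service["group"] in HOST_GROUPS[host]

    for host, groups in HOST_GROUPS.items():
        for name, svc in SERVICES.items():
            state = "changed" if acts_on(host, svc) else "skipping"
            print(f"{state}: [{host}] => {name}")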
13:56:25.869804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.869813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.869821 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.869847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.869857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.869870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.869879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.869894 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.869902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.869933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.869943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.869956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.869964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.869979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.869988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.869996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.870051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.870068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.870077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.870093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.870102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', 
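(Editorial aside on the nova-libvirt items above.) Unlike the other services, nova_libvirt carries a non-empty 'dimensions' block with a memlock ulimit of 67108864 bytes (64 MiB) soft and hard. One plausible way such a dimensions dict maps onto container runtime limits is sketched below as docker-run style flags; the flag rendering is an illustration only, not the code path the kolla container module actually takes.

    # Sketch: turning the 'dimensions' ulimits printed above into docker-style
    # flags. 67108864 bytes == 64 MiB of lockable memory for the libvirt container.
    dimensions = {"ulimits": {"memlock": {"soft": 67108864, "hard": 67108864}}}

    def ulimit_flags(dims: dict) -> list[str]:
        flags = []
        for name, limit in dims.get("ulimits", {}).items():
            # docker expects --ulimit NAME=SOFT[:HARD]
            flags.append(f"--ulimit {name}={limit['soft']}:{limit['hard']}")
        return flags

    print(ulimit_flags(dimensions))  # ['--ulimit memlock=67108864:67108864']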
'', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.870110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.870137 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.870152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.870167 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.870176 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.870185 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.870210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.870224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-23 13:56:25.870232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.870241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', 
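(Editorial aside on the healthcheck blocks above.) Every service entry carries the same healthcheck shape: interval, retries, start_period and timeout as strings plus a test command list. Assuming the numeric fields are seconds, which is how they read, the sketch below converts one of these dicts into the keyword form the Docker Engine API expects, where durations are nanoseconds. The conversion is illustrative; kolla's own container module may handle this differently.

    # Sketch: converting a kolla-style healthcheck dict (as printed above) into
    # Docker Engine API healthcheck options (durations in nanoseconds).
    NANOS = 1_000_000_000

    def to_docker_healthcheck(hc: dict) -> dict:
        return {
            "test": hc["test"],                       # e.g. ['CMD-SHELL', 'healthcheck_port nova-compute 5672']
            "interval": int(hc["interval"]) * NANOS,  # assumed to be seconds
            "timeout": int(hc["timeout"]) * NANOS,
            "start_period": int(hc["start_period"]) * NANOS,
            "retries": int(hc["retries"]),
        }

    example = {"interval": "30", "retries": "3", "start_period": "5",
               "test": ["CMD-SHELL", "healthcheck_port nova-compute 5672"],
               "timeout": "30"}
    print(to_docker_healthcheck(example))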
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.870249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-23 13:56:25.870257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.870266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.870296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-23 13:56:25.870311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.870320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': 
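(Editorial aside on the novncproxy/spicehtml5proxy items above.) Their healthchecks embed each node's API address and console port, e.g. http://192.168.16.10:6080/vnc_lite.html on testbed-node-0 and http://192.168.16.12:6082/spice_auto.html on testbed-node-2. The real healthcheck_curl helper ships inside the kolla images; the snippet below is only a rough Python stand-in for the same idea, exiting 0 on HTTP success and 1 otherwise.

    # Sketch: a rough stand-in for the 'healthcheck_curl <url>' test seen above.
    # Not the helper shipped in the kolla images; illustration only.
    import sys
    import urllib.error
    import urllib.request

    def healthcheck_curl(url: str, timeout: float = 10.0) -> int:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 0 if resp.status < 400 else 1
        except (urllib.error.URLError, OSError):
            return 1

    if __name__ == "__main__":
        # e.g. http://192.168.16.11:6080/vnc_lite.html on testbed-node-1
        sys.exit(healthcheck_curl(sys.argv[1]))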
['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.870328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.870337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.870345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.870361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.870387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.870401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.870409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.870424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.870448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.870456 | orchestrator | 2025-03-23 13:56:25.870464 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-03-23 13:56:25.870472 | orchestrator | Sunday 23 March 2025 13:52:45 +0000 (0:00:08.968) 0:05:58.283 ********** 2025-03-23 13:56:25.870480 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:56:25.870488 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:56:25.870496 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:56:25.870504 | orchestrator | 
skipping: [testbed-node-0] 2025-03-23 13:56:25.870516 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.870524 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.870532 | orchestrator | 2025-03-23 13:56:25.870540 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-03-23 13:56:25.870548 | orchestrator | Sunday 23 March 2025 13:52:47 +0000 (0:00:02.055) 0:06:00.338 ********** 2025-03-23 13:56:25.870556 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-03-23 13:56:25.870564 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-03-23 13:56:25.870572 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-03-23 13:56:25.870580 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-03-23 13:56:25.870606 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-03-23 13:56:25.870615 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.870623 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-03-23 13:56:25.870631 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.870639 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-03-23 13:56:25.870648 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.870655 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-03-23 13:56:25.870663 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-03-23 13:56:25.870671 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-03-23 13:56:25.870679 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-03-23 13:56:25.870687 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-03-23 13:56:25.870695 | orchestrator | 2025-03-23 13:56:25.870703 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-03-23 13:56:25.870711 | orchestrator | Sunday 23 March 2025 13:52:53 +0000 (0:00:06.265) 0:06:06.604 ********** 2025-03-23 13:56:25.870719 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:56:25.870726 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:56:25.870734 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:56:25.870742 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.870750 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.870758 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.870766 | orchestrator | 2025-03-23 13:56:25.870774 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-03-23 13:56:25.870785 | orchestrator | Sunday 23 March 2025 13:52:54 +0000 (0:00:01.095) 0:06:07.700 ********** 2025-03-23 13:56:25.870794 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-03-23 13:56:25.870802 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-03-23 
13:56:25.870810 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-03-23 13:56:25.870818 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-03-23 13:56:25.870826 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-03-23 13:56:25.870834 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-03-23 13:56:25.870842 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-03-23 13:56:25.870850 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-03-23 13:56:25.870864 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-03-23 13:56:25.870872 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.870880 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-03-23 13:56:25.870888 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-03-23 13:56:25.870895 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.870903 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-03-23 13:56:25.870911 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.870919 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-03-23 13:56:25.870927 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-03-23 13:56:25.870935 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-03-23 13:56:25.870943 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-03-23 13:56:25.870950 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-03-23 13:56:25.870958 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-03-23 13:56:25.870966 | orchestrator | 2025-03-23 13:56:25.870974 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-03-23 13:56:25.870982 | orchestrator | Sunday 23 March 2025 13:53:03 +0000 (0:00:09.558) 0:06:17.258 ********** 2025-03-23 13:56:25.870990 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-03-23 13:56:25.870998 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-03-23 13:56:25.871022 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-03-23 13:56:25.871031 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-03-23 13:56:25.871039 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 
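(Editorial aside on the libvirt and SASL tasks above.) Both iterate over (src, dest) pairs of Jinja2 templates: qemu.conf.j2 -> qemu.conf, libvirtd.conf.j2 -> libvirtd.conf, auth.conf.j2 -> auth.conf, sasl.conf.j2 -> sasl.conf, rendered per host into /etc/kolla/<service>/. A minimal stand-alone sketch of that render-per-item pattern follows; the template body and variables are placeholders, not the real kolla-ansible templates.

    # Sketch: rendering (src, dest) template pairs the way the tasks above loop
    # over them. Template content and variables are placeholders only.
    from pathlib import Path
    from jinja2 import Environment, FileSystemLoader

    workdir = Path("./render-demo")
    (workdir / "templates").mkdir(parents=True, exist_ok=True)
    (workdir / "templates" / "qemu.conf.j2").write_text(
        'user = "{{ qemu_user }}"\n'  # placeholder template body
    )

    env = Environment(loader=FileSystemLoader(workdir / "templates"))
    items = [{"src": "qemu.conf.j2", "dest": "qemu.conf"}]  # mirrors the log items

    for item in items:
        rendered = env.get_template(item["src"]).render(qemu_user="nova")
        (workdir / item["dest"]).write_text(rendered)
        print(f"changed: wrote {item['dest']}")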
'id_rsa.pub'})  2025-03-23 13:56:25.871047 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-03-23 13:56:25.871055 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-03-23 13:56:25.871063 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.871071 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-03-23 13:56:25.871079 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-03-23 13:56:25.871087 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-03-23 13:56:25.871095 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-03-23 13:56:25.871102 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-03-23 13:56:25.871110 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-03-23 13:56:25.871118 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-03-23 13:56:25.871126 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.871134 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-03-23 13:56:25.871142 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.871154 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-03-23 13:56:25.871162 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-03-23 13:56:25.871170 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-03-23 13:56:25.871178 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-03-23 13:56:25.871186 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-03-23 13:56:25.871193 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-03-23 13:56:25.871201 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-03-23 13:56:25.871209 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-03-23 13:56:25.871217 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-03-23 13:56:25.871225 | orchestrator | 2025-03-23 13:56:25.871233 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-03-23 13:56:25.871241 | orchestrator | Sunday 23 March 2025 13:53:17 +0000 (0:00:13.218) 0:06:30.477 ********** 2025-03-23 13:56:25.871248 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:56:25.871256 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:56:25.871264 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:56:25.871272 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.871280 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.871288 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.871296 | orchestrator | 2025-03-23 13:56:25.871304 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-03-23 13:56:25.871312 | orchestrator | Sunday 23 March 2025 13:53:18 +0000 
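(Editorial aside on the nova-ssh files above.) The task distributes sshd_config, ssh_config and an id_rsa/id_rsa.pub pair to the compute hosts; that key lets nova move instance disks between nodes via the sshd that the healthcheck probes on port 8022. The real key material comes from the deployment's generated secrets, but as a rough illustration, producing such a pair in Python could look like the sketch below; paths and key size are assumptions.

    # Sketch: generating an RSA keypair analogous to the id_rsa/id_rsa.pub items
    # above. The deployment takes these from its secrets; illustration only.
    from pathlib import Path
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

    private_pem = key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.TraditionalOpenSSL,
        encryption_algorithm=serialization.NoEncryption(),
    )
    public_openssh = key.public_key().public_bytes(
        encoding=serialization.Encoding.OpenSSH,
        format=serialization.PublicFormat.OpenSSH,
    )

    Path("id_rsa").write_bytes(private_pem)
    Path("id_rsa.pub").write_bytes(public_openssh + b" nova-ssh\n")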
(0:00:00.902) 0:06:31.379 ********** 2025-03-23 13:56:25.871320 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:56:25.871328 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:56:25.871336 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:56:25.871344 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.871352 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.871359 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.871367 | orchestrator | 2025-03-23 13:56:25.871375 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-03-23 13:56:25.871383 | orchestrator | Sunday 23 March 2025 13:53:19 +0000 (0:00:01.054) 0:06:32.434 ********** 2025-03-23 13:56:25.871391 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.871402 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.871410 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.871418 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:56:25.871426 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:56:25.871445 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:56:25.871454 | orchestrator | 2025-03-23 13:56:25.871462 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-03-23 13:56:25.871470 | orchestrator | Sunday 23 March 2025 13:53:23 +0000 (0:00:04.512) 0:06:36.947 ********** 2025-03-23 13:56:25.871496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.871511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.871519 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
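(Editorial aside on the hostnqn task above.) It changes only the compute hosts; the file it writes is the NVMe-oF host identifier used when attaching NVMe/TCP volumes. Assuming the conventional format that `nvme gen-hostnqn` emits, a minimal equivalent is below; the /etc/nvme/hostnqn target path is stated as an assumption rather than read from the role.

    # Sketch: generating an NVMe host NQN in the conventional
    # 'nqn.2014-08.org.nvmexpress:uuid:<uuid4>' form (as `nvme gen-hostnqn` does).
    import uuid

    hostnqn = f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"
    print(hostnqn)
    # The role would persist this on each compute host, e.g. /etc/nvme/hostnqn
    # (path assumed here for illustration).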
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.871528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.871536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.871544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.871563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.871589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.871603 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:56:25.871612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.871620 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.871635 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.871644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.871653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.871679 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.871693 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.871702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.871710 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:56:25.871718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.871734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.871743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.871761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.871770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.871778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.871793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.871802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.871810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.871828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.871837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.871848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.871857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.871866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.871874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.871893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.871905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.871914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.871922 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.871930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.871939 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.871947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.871972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}})  2025-03-23 13:56:25.871981 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.871989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.871998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.872006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.872015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.872027 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:56:25.872042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.872051 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.872063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.872071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.872080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.872088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.872096 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.872105 | orchestrator | 2025-03-23 13:56:25.872113 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-03-23 13:56:25.872121 | orchestrator | Sunday 23 March 2025 13:53:26 +0000 (0:00:02.651) 0:06:39.598 ********** 2025-03-23 13:56:25.872129 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-03-23 13:56:25.872137 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-03-23 13:56:25.872145 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:56:25.872157 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-03-23 13:56:25.872166 | orchestrator | skipping: [testbed-node-4] => 
(item=nova-compute-ironic)  2025-03-23 13:56:25.872173 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:56:25.872182 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-03-23 13:56:25.872190 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-03-23 13:56:25.872198 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:56:25.872205 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-03-23 13:56:25.872213 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-03-23 13:56:25.872221 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.872229 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-03-23 13:56:25.872237 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-03-23 13:56:25.872245 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.872253 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-03-23 13:56:25.872261 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-03-23 13:56:25.872269 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.872277 | orchestrator | 2025-03-23 13:56:25.872285 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-03-23 13:56:25.872293 | orchestrator | Sunday 23 March 2025 13:53:27 +0000 (0:00:00.965) 0:06:40.563 ********** 2025-03-23 13:56:25.872311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.872321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.872329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.872338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.872351 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-23 13:56:25.872363 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-23 13:56:25.872378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-23 13:56:25.872387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-23 13:56:25.872396 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-23 13:56:25.872409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-23 13:56:25.872417 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-23 13:56:25.872439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.872449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.872464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.872473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.872486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.872494 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.872502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-23 13:56:25.872515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.872523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.872532 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-23 13:56:25.872546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.872559 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.872568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 
'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.872576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.872584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-23 13:56:25.872596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.872605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.872615 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-23 13:56:25.872633 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.872656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-23 13:56:25.872676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 13:56:25.872689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.872707 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.872721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.872752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.872767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.872782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.872791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.872810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.872819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.872833 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.872842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.872850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.872859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.872871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.872885 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-23 13:56:25.872898 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-23 13:56:25.872907 | orchestrator | 2025-03-23 13:56:25.872915 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-03-23 13:56:25.872926 | orchestrator | Sunday 23 March 2025 13:53:31 +0000 (0:00:03.961) 0:06:44.525 ********** 2025-03-23 13:56:25.872935 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:56:25.872943 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:56:25.872951 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:56:25.872959 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.872967 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.872975 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.872982 | orchestrator | 2025-03-23 13:56:25.872990 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-03-23 13:56:25.872998 | orchestrator | Sunday 23 March 2025 13:53:32 +0000 (0:00:00.845) 0:06:45.370 ********** 2025-03-23 13:56:25.873006 | orchestrator | 
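The "Check nova-cell containers" task above loops over the nova-cell service definitions and only acts where a service is enabled and the host belongs to that service's group: nova-libvirt, nova-ssh and nova-compute come up changed on the compute hosts (testbed-node-3/4/5), nova-conductor and nova-novncproxy on the controllers (testbed-node-0/1/2), and services with 'enabled': False (nova-spicehtml5proxy, nova-serialproxy, nova-compute-ironic) are skipped everywhere. The sketch below only illustrates that filtering pattern in Python, built from abridged service entries copied out of the log; it is not the kolla-ansible implementation, and the host-to-group mapping is an assumption inferred from which items were changed versus skipped.

    # Illustrative only: mimics the enabled/group filtering visible in the log.
    # Service entries are abridged from the task output above; host groups are
    # an assumption inferred from the changed/skipping results.
    nova_cell_services = {
        "nova-libvirt": {
            "container_name": "nova_libvirt",
            "group": "compute",
            "enabled": True,
            "image": "registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206",
        },
        "nova-conductor": {
            "container_name": "nova_conductor",
            "group": "nova-conductor",
            "enabled": True,
            "image": "registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206",
        },
        "nova-serialproxy": {
            "container_name": "nova_serialproxy",
            "group": "nova-serialproxy",
            "enabled": False,
            "image": "registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206",
        },
    }

    host_groups = {
        "testbed-node-0": {"nova-conductor", "nova-novncproxy"},  # controller
        "testbed-node-3": {"compute"},                            # compute
    }

    def services_for(host):
        """Names of the services this task would handle on the given host."""
        return [name for name, svc in nova_cell_services.items()
                if svc["enabled"] and svc["group"] in host_groups[host]]

    print(services_for("testbed-node-3"))  # ['nova-libvirt']
    print(services_for("testbed-node-0"))  # ['nova-conductor']

This reproduces the pattern in the log: nova_libvirt is handled only on the compute nodes, nova_conductor only on the controllers, and nova_serialproxy nowhere.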
2025-03-23 13:56:25.873014 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-03-23 13:56:25.873022 | orchestrator | Sunday 23 March 2025 13:53:32 +0000 (0:00:00.343) 0:06:45.714 ********** 2025-03-23 13:56:25.873030 | orchestrator | 2025-03-23 13:56:25.873038 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-03-23 13:56:25.873046 | orchestrator | Sunday 23 March 2025 13:53:32 +0000 (0:00:00.138) 0:06:45.853 ********** 2025-03-23 13:56:25.873054 | orchestrator | 2025-03-23 13:56:25.873062 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-03-23 13:56:25.873070 | orchestrator | Sunday 23 March 2025 13:53:32 +0000 (0:00:00.330) 0:06:46.183 ********** 2025-03-23 13:56:25.873078 | orchestrator | 2025-03-23 13:56:25.873086 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-03-23 13:56:25.873094 | orchestrator | Sunday 23 March 2025 13:53:33 +0000 (0:00:00.132) 0:06:46.316 ********** 2025-03-23 13:56:25.873102 | orchestrator | 2025-03-23 13:56:25.873110 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-03-23 13:56:25.873118 | orchestrator | Sunday 23 March 2025 13:53:33 +0000 (0:00:00.345) 0:06:46.661 ********** 2025-03-23 13:56:25.873125 | orchestrator | 2025-03-23 13:56:25.873133 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-03-23 13:56:25.873141 | orchestrator | Sunday 23 March 2025 13:53:33 +0000 (0:00:00.124) 0:06:46.786 ********** 2025-03-23 13:56:25.873149 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:56:25.873157 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:56:25.873165 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:56:25.873173 | orchestrator | 2025-03-23 13:56:25.873181 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-03-23 13:56:25.873194 | orchestrator | Sunday 23 March 2025 13:53:41 +0000 (0:00:08.433) 0:06:55.220 ********** 2025-03-23 13:56:25.873202 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:56:25.873210 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:56:25.873218 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:56:25.873226 | orchestrator | 2025-03-23 13:56:25.873234 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-03-23 13:56:25.873242 | orchestrator | Sunday 23 March 2025 13:53:54 +0000 (0:00:12.159) 0:07:07.379 ********** 2025-03-23 13:56:25.873253 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:56:25.873262 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:56:25.873270 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:56:25.873278 | orchestrator | 2025-03-23 13:56:25.873286 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-03-23 13:56:25.873294 | orchestrator | Sunday 23 March 2025 13:54:16 +0000 (0:00:21.982) 0:07:29.362 ********** 2025-03-23 13:56:25.873302 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:56:25.873310 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:56:25.873318 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:56:25.873325 | orchestrator | 2025-03-23 13:56:25.873333 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-03-23 
13:56:25.873341 | orchestrator | Sunday 23 March 2025 13:54:41 +0000 (0:00:25.613) 0:07:54.975 ********** 2025-03-23 13:56:25.873349 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:56:25.873357 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:56:25.873365 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:56:25.873373 | orchestrator | 2025-03-23 13:56:25.873381 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-03-23 13:56:25.873389 | orchestrator | Sunday 23 March 2025 13:54:42 +0000 (0:00:00.996) 0:07:55.972 ********** 2025-03-23 13:56:25.873397 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:56:25.873405 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:56:25.873413 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:56:25.873421 | orchestrator | 2025-03-23 13:56:25.873429 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-03-23 13:56:25.873475 | orchestrator | Sunday 23 March 2025 13:54:43 +0000 (0:00:00.830) 0:07:56.802 ********** 2025-03-23 13:56:25.873483 | orchestrator | changed: [testbed-node-4] 2025-03-23 13:56:25.873491 | orchestrator | changed: [testbed-node-3] 2025-03-23 13:56:25.873499 | orchestrator | changed: [testbed-node-5] 2025-03-23 13:56:25.873507 | orchestrator | 2025-03-23 13:56:25.873515 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-03-23 13:56:25.873523 | orchestrator | Sunday 23 March 2025 13:55:05 +0000 (0:00:22.449) 0:08:19.252 ********** 2025-03-23 13:56:25.873531 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:56:25.873539 | orchestrator | 2025-03-23 13:56:25.873547 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-03-23 13:56:25.873555 | orchestrator | Sunday 23 March 2025 13:55:06 +0000 (0:00:00.168) 0:08:19.421 ********** 2025-03-23 13:56:25.873563 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:56:25.873571 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.873579 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:56:25.873587 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.873594 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.873602 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
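The retry message above comes from the handler that waits for the restarted nova-compute services to report in, with a budget of 20 attempts; the first check fails and a later one succeeds. A minimal sketch of that poll-with-retry-budget pattern, with the actual service lookup replaced by a hypothetical callable so the example stays self-contained:

```python
import time
from typing import Callable

def wait_for_registration(check: Callable[[], bool], retries: int = 20, delay: float = 10.0) -> bool:
    """Poll `check` until it returns True or the retry budget runs out."""
    for attempt in range(retries):
        if check():
            return True
        print(f"FAILED - RETRYING: Waiting for nova-compute services "
              f"to register themselves ({retries - attempt - 1} retries left).")
        time.sleep(delay)
    return False

# Hypothetical usage: the real check would query the Nova API for the expected
# compute hosts; here a stub fills the set after one failed attempt.
expected = {"testbed-node-3", "testbed-node-4", "testbed-node-5"}
registered: set = set()

def stub_check() -> bool:
    result = expected <= registered
    registered.update(expected)        # pretend the services register afterwards
    return result

print(wait_for_registration(stub_check, retries=20, delay=0.01))
```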
2025-03-23 13:56:25.873611 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-03-23 13:56:25.873619 | orchestrator | 2025-03-23 13:56:25.873627 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-03-23 13:56:25.873638 | orchestrator | Sunday 23 March 2025 13:55:28 +0000 (0:00:22.612) 0:08:42.034 ********** 2025-03-23 13:56:25.873647 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:56:25.873663 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.873865 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:56:25.873882 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.873890 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:56:25.873897 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.873904 | orchestrator | 2025-03-23 13:56:25.873911 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-03-23 13:56:25.873918 | orchestrator | Sunday 23 March 2025 13:55:43 +0000 (0:00:15.040) 0:08:57.075 ********** 2025-03-23 13:56:25.873924 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:56:25.873932 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.873938 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.873945 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.873952 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:56:25.873959 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-03-23 13:56:25.873966 | orchestrator | 2025-03-23 13:56:25.873973 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-03-23 13:56:25.873980 | orchestrator | Sunday 23 March 2025 13:55:48 +0000 (0:00:04.672) 0:09:01.747 ********** 2025-03-23 13:56:25.873987 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-03-23 13:56:25.873994 | orchestrator | 2025-03-23 13:56:25.874001 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-03-23 13:56:25.874008 | orchestrator | Sunday 23 March 2025 13:56:00 +0000 (0:00:12.513) 0:09:14.261 ********** 2025-03-23 13:56:25.874038 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-03-23 13:56:25.874046 | orchestrator | 2025-03-23 13:56:25.874053 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-03-23 13:56:25.874060 | orchestrator | Sunday 23 March 2025 13:56:02 +0000 (0:00:01.269) 0:09:15.531 ********** 2025-03-23 13:56:25.874067 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:56:25.874074 | orchestrator | 2025-03-23 13:56:25.874081 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-03-23 13:56:25.874087 | orchestrator | Sunday 23 March 2025 13:56:03 +0000 (0:00:01.583) 0:09:17.114 ********** 2025-03-23 13:56:25.874094 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-03-23 13:56:25.874101 | orchestrator | 2025-03-23 13:56:25.874108 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-03-23 13:56:25.874115 | orchestrator | Sunday 23 March 2025 13:56:14 +0000 (0:00:11.078) 0:09:28.193 ********** 2025-03-23 13:56:25.874122 | orchestrator | ok: [testbed-node-3] 2025-03-23 13:56:25.874129 | orchestrator | ok: [testbed-node-4] 2025-03-23 13:56:25.874136 | orchestrator | ok: 
[testbed-node-5] 2025-03-23 13:56:25.874143 | orchestrator | ok: [testbed-node-0] 2025-03-23 13:56:25.874150 | orchestrator | ok: [testbed-node-1] 2025-03-23 13:56:25.874156 | orchestrator | ok: [testbed-node-2] 2025-03-23 13:56:25.874163 | orchestrator | 2025-03-23 13:56:25.874175 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-03-23 13:56:25.874183 | orchestrator | 2025-03-23 13:56:25.874190 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-03-23 13:56:25.874197 | orchestrator | Sunday 23 March 2025 13:56:17 +0000 (0:00:02.763) 0:09:30.956 ********** 2025-03-23 13:56:25.874204 | orchestrator | changed: [testbed-node-0] 2025-03-23 13:56:25.874211 | orchestrator | changed: [testbed-node-1] 2025-03-23 13:56:25.874217 | orchestrator | changed: [testbed-node-2] 2025-03-23 13:56:25.874224 | orchestrator | 2025-03-23 13:56:25.874231 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-03-23 13:56:25.874238 | orchestrator | 2025-03-23 13:56:25.874245 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-03-23 13:56:25.874252 | orchestrator | Sunday 23 March 2025 13:56:18 +0000 (0:00:01.173) 0:09:32.129 ********** 2025-03-23 13:56:25.874258 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.874265 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.874279 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.874286 | orchestrator | 2025-03-23 13:56:25.874293 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-03-23 13:56:25.874300 | orchestrator | 2025-03-23 13:56:25.874307 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-03-23 13:56:25.874313 | orchestrator | Sunday 23 March 2025 13:56:19 +0000 (0:00:00.904) 0:09:33.034 ********** 2025-03-23 13:56:25.874320 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-03-23 13:56:25.874327 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-03-23 13:56:25.874334 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-03-23 13:56:25.874341 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-03-23 13:56:25.874348 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-03-23 13:56:25.874354 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-03-23 13:56:25.874361 | orchestrator | skipping: [testbed-node-3] 2025-03-23 13:56:25.874368 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-03-23 13:56:25.874375 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-03-23 13:56:25.874382 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-03-23 13:56:25.874388 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-03-23 13:56:25.874395 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-03-23 13:56:25.874402 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-03-23 13:56:25.874409 | orchestrator | skipping: [testbed-node-4] 2025-03-23 13:56:25.874416 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-03-23 13:56:25.874423 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-03-23 13:56:25.874441 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-03-23 13:56:25.874451 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-03-23 13:56:25.874458 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-03-23 13:56:25.874465 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-03-23 13:56:25.874472 | orchestrator | skipping: [testbed-node-5] 2025-03-23 13:56:25.874479 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-03-23 13:56:25.874486 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-03-23 13:56:25.874493 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-03-23 13:56:25.874500 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-03-23 13:56:25.874507 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-03-23 13:56:25.874514 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-03-23 13:56:25.874520 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.874527 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-03-23 13:56:25.874534 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-03-23 13:56:25.874541 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-03-23 13:56:25.874548 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-03-23 13:56:25.874555 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-03-23 13:56:25.874562 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-03-23 13:56:25.874569 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.874576 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-03-23 13:56:25.874583 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-03-23 13:56:25.874590 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-03-23 13:56:25.874596 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-03-23 13:56:25.874603 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-03-23 13:56:25.874614 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-03-23 13:56:25.874621 | orchestrator | skipping: [testbed-node-2] 2025-03-23 13:56:25.874627 | orchestrator | 2025-03-23 13:56:25.874634 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-03-23 13:56:25.874641 | orchestrator | 2025-03-23 13:56:25.874648 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-03-23 13:56:25.874655 | orchestrator | Sunday 23 March 2025 13:56:21 +0000 (0:00:01.674) 0:09:34.709 ********** 2025-03-23 13:56:25.874662 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-03-23 13:56:25.874669 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-03-23 13:56:25.874676 | orchestrator | skipping: [testbed-node-0] 2025-03-23 13:56:25.874683 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-03-23 13:56:25.874689 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-03-23 13:56:25.874696 | orchestrator | skipping: [testbed-node-1] 2025-03-23 13:56:25.874706 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-03-23 13:56:28.907928 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)
2025-03-23 13:56:28.908050 | orchestrator | skipping: [testbed-node-2]
2025-03-23 13:56:28.908069 | orchestrator |
2025-03-23 13:56:28.908084 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-03-23 13:56:28.908100 | orchestrator |
2025-03-23 13:56:28.908114 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-03-23 13:56:28.908127 | orchestrator | Sunday 23 March 2025 13:56:22 +0000 (0:00:00.721) 0:09:35.431 **********
2025-03-23 13:56:28.908142 | orchestrator | skipping: [testbed-node-0]
2025-03-23 13:56:28.908155 | orchestrator |
2025-03-23 13:56:28.908170 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-03-23 13:56:28.908184 | orchestrator |
2025-03-23 13:56:28.908198 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-03-23 13:56:28.908212 | orchestrator | Sunday 23 March 2025 13:56:23 +0000 (0:00:01.063) 0:09:36.494 **********
2025-03-23 13:56:28.908226 | orchestrator | skipping: [testbed-node-0]
2025-03-23 13:56:28.908240 | orchestrator | skipping: [testbed-node-1]
2025-03-23 13:56:28.908253 | orchestrator | skipping: [testbed-node-2]
2025-03-23 13:56:28.908267 | orchestrator |
2025-03-23 13:56:28.908298 | orchestrator | PLAY RECAP *********************************************************************
2025-03-23 13:56:28.908313 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-23 13:56:28.908329 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-03-23 13:56:28.908343 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-03-23 13:56:28.908357 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-03-23 13:56:28.908371 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-03-23 13:56:28.908385 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-03-23 13:56:28.908399 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-03-23 13:56:28.908500 | orchestrator |
2025-03-23 13:56:28.908517 | orchestrator |
2025-03-23 13:56:28.908532 | orchestrator | TASKS RECAP ********************************************************************
2025-03-23 13:56:28.908547 | orchestrator | Sunday 23 March 2025 13:56:23 +0000 (0:00:00.608) 0:09:37.102 **********
2025-03-23 13:56:28.908586 | orchestrator | ===============================================================================
2025-03-23 13:56:28.908602 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.31s
2025-03-23 13:56:28.908617 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 25.61s
2025-03-23 13:56:28.908631 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.61s
2025-03-23 13:56:28.908653 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 22.45s
2025-03-23 13:56:28.908669 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.98s
2025-03-23 13:56:28.908684 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.13s
2025-03-23 13:56:28.908700 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.28s
2025-03-23 13:56:28.908715 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.92s
2025-03-23 13:56:28.908730 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.11s
2025-03-23 13:56:28.908744 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 15.04s
2025-03-23 13:56:28.908759 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.08s
2025-03-23 13:56:28.908775 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.26s
2025-03-23 13:56:28.908790 | orchestrator | nova-cell : Copying files for nova-ssh --------------------------------- 13.22s
2025-03-23 13:56:28.908805 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.51s
2025-03-23 13:56:28.908819 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 12.16s
2025-03-23 13:56:28.908833 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.57s
2025-03-23 13:56:28.908847 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.08s
2025-03-23 13:56:28.908860 | orchestrator | nova : Restart nova-api container -------------------------------------- 11.00s
2025-03-23 13:56:28.908874 | orchestrator | nova : Copying over nova.conf for nova-api-bootstrap ------------------- 10.10s
2025-03-23 13:56:28.908887 | orchestrator | nova-cell : Copying over libvirt SASL configuration --------------------- 9.56s
2025-03-23 13:56:28.908901 | orchestrator | 2025-03-23 13:56:25 | INFO  | Wait 1 second(s) until the next check
2025-03-23 13:56:28.908934 | orchestrator | 2025-03-23 13:56:28 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
2025-03-23 13:56:31.954750 | orchestrator | 2025-03-23 13:56:28 | INFO  | Wait 1 second(s) until the next check
2025-03-23 13:56:31.954874 | orchestrator | 2025-03-23 13:56:31 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
2025-03-23 13:56:35.002856 | orchestrator | 2025-03-23 13:56:31 | INFO  | Wait 1 second(s) until the next check
2025-03-23 13:56:35.002990 | orchestrator | 2025-03-23 13:56:35 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
2025-03-23 13:56:38.049535 | orchestrator | 2025-03-23 13:56:35 | INFO  | Wait 1 second(s) until the next check
2025-03-23 13:56:38.049660 | orchestrator | 2025-03-23 13:56:38 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
2025-03-23 13:56:41.103300 | orchestrator | 2025-03-23 13:56:38 | INFO  | Wait 1 second(s) until the next check
2025-03-23 13:56:41.103407 | orchestrator | 2025-03-23 13:56:41 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
2025-03-23 13:56:44.141176 | orchestrator | 2025-03-23 13:56:41 | INFO  | Wait 1 second(s) until the next check
2025-03-23 13:56:44.141315 | orchestrator | 2025-03-23 13:56:44 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
2025-03-23 13:56:47.182691 | orchestrator | 2025-03-23 13:56:44 | INFO  | Wait 1 second(s) until the next check
2025-03-23 13:56:47.182867 | orchestrator | 2025-03-23 13:56:47 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
2025-03-23
13:56:50.234852 | orchestrator | 2025-03-23 13:56:47 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:56:50.234987 | orchestrator | 2025-03-23 13:56:50 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:56:53.276540 | orchestrator | 2025-03-23 13:56:50 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:56:53.276665 | orchestrator | 2025-03-23 13:56:53 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:56:56.323944 | orchestrator | 2025-03-23 13:56:53 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:56:56.324084 | orchestrator | 2025-03-23 13:56:56 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:56:59.366117 | orchestrator | 2025-03-23 13:56:56 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:56:59.366242 | orchestrator | 2025-03-23 13:56:59 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:57:02.410468 | orchestrator | 2025-03-23 13:56:59 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:57:02.410606 | orchestrator | 2025-03-23 13:57:02 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:57:05.460099 | orchestrator | 2025-03-23 13:57:02 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:57:05.460224 | orchestrator | 2025-03-23 13:57:05 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:57:08.509346 | orchestrator | 2025-03-23 13:57:05 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:57:08.509522 | orchestrator | 2025-03-23 13:57:08 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:57:11.545065 | orchestrator | 2025-03-23 13:57:08 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:57:11.545182 | orchestrator | 2025-03-23 13:57:11 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:57:14.589737 | orchestrator | 2025-03-23 13:57:11 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:57:14.589882 | orchestrator | 2025-03-23 13:57:14 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:57:17.634668 | orchestrator | 2025-03-23 13:57:14 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:57:17.634786 | orchestrator | 2025-03-23 13:57:17 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:57:20.673987 | orchestrator | 2025-03-23 13:57:17 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:57:20.674147 | orchestrator | 2025-03-23 13:57:20 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:57:23.718879 | orchestrator | 2025-03-23 13:57:20 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:57:23.719016 | orchestrator | 2025-03-23 13:57:23 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:57:26.756820 | orchestrator | 2025-03-23 13:57:23 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:57:26.756935 | orchestrator | 2025-03-23 13:57:26 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:57:29.804010 | orchestrator | 2025-03-23 13:57:26 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:57:29.804140 | orchestrator | 2025-03-23 13:57:29 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:57:32.850764 | orchestrator | 2025-03-23 13:57:29 | INFO  | Wait 1 second(s) 
until the next check 2025-03-23 13:57:32.850927 | orchestrator | 2025-03-23 13:57:32 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:57:35.900369 | orchestrator | 2025-03-23 13:57:32 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:57:35.900528 | orchestrator | 2025-03-23 13:57:35 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:57:38.953751 | orchestrator | 2025-03-23 13:57:35 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:57:38.953874 | orchestrator | 2025-03-23 13:57:38 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:57:42.010736 | orchestrator | 2025-03-23 13:57:38 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:57:42.010850 | orchestrator | 2025-03-23 13:57:42 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:57:45.060303 | orchestrator | 2025-03-23 13:57:42 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:57:45.060464 | orchestrator | 2025-03-23 13:57:45 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:57:48.103193 | orchestrator | 2025-03-23 13:57:45 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:57:48.103333 | orchestrator | 2025-03-23 13:57:48 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:57:51.144704 | orchestrator | 2025-03-23 13:57:48 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:57:51.144837 | orchestrator | 2025-03-23 13:57:51 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:57:54.183877 | orchestrator | 2025-03-23 13:57:51 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:57:54.184012 | orchestrator | 2025-03-23 13:57:54 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:57:57.221955 | orchestrator | 2025-03-23 13:57:54 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:57:57.222888 | orchestrator | 2025-03-23 13:57:57 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:58:00.274741 | orchestrator | 2025-03-23 13:57:57 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:58:00.274868 | orchestrator | 2025-03-23 13:58:00 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:58:03.330304 | orchestrator | 2025-03-23 13:58:00 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:58:03.330519 | orchestrator | 2025-03-23 13:58:03 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:58:06.378895 | orchestrator | 2025-03-23 13:58:03 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:58:06.379035 | orchestrator | 2025-03-23 13:58:06 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:58:09.428471 | orchestrator | 2025-03-23 13:58:06 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:58:09.428606 | orchestrator | 2025-03-23 13:58:09 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:58:12.467125 | orchestrator | 2025-03-23 13:58:09 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:58:12.467258 | orchestrator | 2025-03-23 13:58:12 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:58:15.515204 | orchestrator | 2025-03-23 13:58:12 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:58:15.515354 | orchestrator | 2025-03-23 
13:58:15 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:58:18.556049 | orchestrator | 2025-03-23 13:58:15 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:58:18.556203 | orchestrator | 2025-03-23 13:58:18 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:58:21.593772 | orchestrator | 2025-03-23 13:58:18 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:58:21.594693 | orchestrator | 2025-03-23 13:58:21 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:58:24.640533 | orchestrator | 2025-03-23 13:58:21 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:58:24.640672 | orchestrator | 2025-03-23 13:58:24 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:58:27.689344 | orchestrator | 2025-03-23 13:58:24 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:58:27.689524 | orchestrator | 2025-03-23 13:58:27 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:58:30.734678 | orchestrator | 2025-03-23 13:58:27 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:58:30.734859 | orchestrator | 2025-03-23 13:58:30 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:58:33.784835 | orchestrator | 2025-03-23 13:58:30 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:58:33.784953 | orchestrator | 2025-03-23 13:58:33 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:58:36.825995 | orchestrator | 2025-03-23 13:58:33 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:58:36.826140 | orchestrator | 2025-03-23 13:58:36 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:58:39.870158 | orchestrator | 2025-03-23 13:58:36 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:58:39.870277 | orchestrator | 2025-03-23 13:58:39 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:58:42.916555 | orchestrator | 2025-03-23 13:58:39 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:58:42.916670 | orchestrator | 2025-03-23 13:58:42 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:58:45.966112 | orchestrator | 2025-03-23 13:58:42 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:58:45.966227 | orchestrator | 2025-03-23 13:58:45 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:58:49.021221 | orchestrator | 2025-03-23 13:58:45 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:58:49.021309 | orchestrator | 2025-03-23 13:58:49 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:58:52.073597 | orchestrator | 2025-03-23 13:58:49 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:58:52.073711 | orchestrator | 2025-03-23 13:58:52 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:58:55.116584 | orchestrator | 2025-03-23 13:58:52 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:58:55.116713 | orchestrator | 2025-03-23 13:58:55 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:58:58.164300 | orchestrator | 2025-03-23 13:58:55 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:58:58.164441 | orchestrator | 2025-03-23 13:58:58 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 
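Everything from 13:56 onwards is the OSISM task watcher: it reports that task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is still in state STARTED, announces a short wait, and checks again a few seconds later. A minimal sketch of such a wait loop follows; get_task_state is a hypothetical stand-in for however the real client queries the task backend, and the terminal states are assumed from the SUCCESS that task 411a55ae-9d0e-4b3f-a4f5-2a0781369d4f reaches a little further down:

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}   # assumed terminal states

def wait_for_task(task_id: str, get_task_state, interval: float = 1.0) -> str:
    """Poll one task until it reaches a terminal state, logging each check."""
    while True:
        state = get_task_state(task_id)    # hypothetical query function
        print(f"Task {task_id} is in state {state}")
        if state in TERMINAL_STATES:
            return state
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)

# Stubbed usage so the sketch runs: two STARTED reports, then SUCCESS.
states = iter(["STARTED", "STARTED", "SUCCESS"])
print(wait_for_task("f8079d8c-9512-4ecd-b2ac-9d3341f82384", lambda _id: next(states), 0.01))
```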
2025-03-23 13:59:01.212816 | orchestrator | 2025-03-23 13:58:58 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:59:01.212946 | orchestrator | 2025-03-23 13:59:01 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:59:04.265250 | orchestrator | 2025-03-23 13:59:01 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:59:04.265410 | orchestrator | 2025-03-23 13:59:04 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:59:07.307901 | orchestrator | 2025-03-23 13:59:04 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:59:07.308044 | orchestrator | 2025-03-23 13:59:07 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:59:10.366555 | orchestrator | 2025-03-23 13:59:07 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:59:10.366688 | orchestrator | 2025-03-23 13:59:10 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:59:13.417879 | orchestrator | 2025-03-23 13:59:10 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:59:13.418781 | orchestrator | 2025-03-23 13:59:13 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:59:16.462684 | orchestrator | 2025-03-23 13:59:13 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:59:16.462780 | orchestrator | 2025-03-23 13:59:16 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:59:19.505596 | orchestrator | 2025-03-23 13:59:16 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:59:19.505767 | orchestrator | 2025-03-23 13:59:19 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:59:22.554541 | orchestrator | 2025-03-23 13:59:19 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:59:22.554669 | orchestrator | 2025-03-23 13:59:22 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:59:25.611338 | orchestrator | 2025-03-23 13:59:22 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:59:25.611511 | orchestrator | 2025-03-23 13:59:25 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:59:25.612536 | orchestrator | 2025-03-23 13:59:25 | INFO  | Task 411a55ae-9d0e-4b3f-a4f5-2a0781369d4f is in state STARTED 2025-03-23 13:59:28.667854 | orchestrator | 2025-03-23 13:59:25 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:59:28.667975 | orchestrator | 2025-03-23 13:59:28 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:59:28.669972 | orchestrator | 2025-03-23 13:59:28 | INFO  | Task 411a55ae-9d0e-4b3f-a4f5-2a0781369d4f is in state STARTED 2025-03-23 13:59:31.725703 | orchestrator | 2025-03-23 13:59:28 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:59:31.725831 | orchestrator | 2025-03-23 13:59:31 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:59:31.726479 | orchestrator | 2025-03-23 13:59:31 | INFO  | Task 411a55ae-9d0e-4b3f-a4f5-2a0781369d4f is in state STARTED 2025-03-23 13:59:34.780244 | orchestrator | 2025-03-23 13:59:31 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:59:34.780374 | orchestrator | 2025-03-23 13:59:34 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:59:34.780708 | orchestrator | 2025-03-23 13:59:34 | INFO  | Task 411a55ae-9d0e-4b3f-a4f5-2a0781369d4f is in state STARTED 2025-03-23 13:59:37.832310 | 
orchestrator | 2025-03-23 13:59:34 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:59:37.832506 | orchestrator | 2025-03-23 13:59:37 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:59:37.833988 | orchestrator | 2025-03-23 13:59:37 | INFO  | Task 411a55ae-9d0e-4b3f-a4f5-2a0781369d4f is in state SUCCESS 2025-03-23 13:59:40.890663 | orchestrator | 2025-03-23 13:59:37 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:59:40.890827 | orchestrator | 2025-03-23 13:59:40 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:59:43.941148 | orchestrator | 2025-03-23 13:59:40 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:59:43.941269 | orchestrator | 2025-03-23 13:59:43 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:59:46.991806 | orchestrator | 2025-03-23 13:59:43 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:59:46.991937 | orchestrator | 2025-03-23 13:59:46 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:59:50.031951 | orchestrator | 2025-03-23 13:59:46 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:59:50.032057 | orchestrator | 2025-03-23 13:59:50 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:59:53.074755 | orchestrator | 2025-03-23 13:59:50 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:59:53.074889 | orchestrator | 2025-03-23 13:59:53 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:59:56.122858 | orchestrator | 2025-03-23 13:59:53 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:59:56.122992 | orchestrator | 2025-03-23 13:59:56 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 13:59:59.168298 | orchestrator | 2025-03-23 13:59:56 | INFO  | Wait 1 second(s) until the next check 2025-03-23 13:59:59.168472 | orchestrator | 2025-03-23 13:59:59 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:00:02.215246 | orchestrator | 2025-03-23 13:59:59 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:00:02.215375 | orchestrator | 2025-03-23 14:00:02 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:00:05.264198 | orchestrator | 2025-03-23 14:00:02 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:00:05.264277 | orchestrator | 2025-03-23 14:00:05 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:00:08.308654 | orchestrator | 2025-03-23 14:00:05 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:00:08.308714 | orchestrator | 2025-03-23 14:00:08 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:00:11.356552 | orchestrator | 2025-03-23 14:00:08 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:00:11.356608 | orchestrator | 2025-03-23 14:00:11 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:00:14.412777 | orchestrator | 2025-03-23 14:00:11 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:00:14.412906 | orchestrator | 2025-03-23 14:00:14 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:00:17.467627 | orchestrator | 2025-03-23 14:00:14 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:00:17.467755 | orchestrator | 2025-03-23 14:00:17 | INFO  | Task 
f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:00:20.515723 | orchestrator | 2025-03-23 14:00:17 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:00:20.515839 | orchestrator | 2025-03-23 14:00:20 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:00:23.568685 | orchestrator | 2025-03-23 14:00:20 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:00:23.568818 | orchestrator | 2025-03-23 14:00:23 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:00:26.615469 | orchestrator | 2025-03-23 14:00:23 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:00:26.615580 | orchestrator | 2025-03-23 14:00:26 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:00:29.664764 | orchestrator | 2025-03-23 14:00:26 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:00:29.664995 | orchestrator | 2025-03-23 14:00:29 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:00:32.708160 | orchestrator | 2025-03-23 14:00:29 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:00:32.708275 | orchestrator | 2025-03-23 14:00:32 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:00:35.755758 | orchestrator | 2025-03-23 14:00:32 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:00:35.755868 | orchestrator | 2025-03-23 14:00:35 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:00:38.810808 | orchestrator | 2025-03-23 14:00:35 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:00:38.810935 | orchestrator | 2025-03-23 14:00:38 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:00:41.868805 | orchestrator | 2025-03-23 14:00:38 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:00:41.868931 | orchestrator | 2025-03-23 14:00:41 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:00:44.914484 | orchestrator | 2025-03-23 14:00:41 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:00:44.914626 | orchestrator | 2025-03-23 14:00:44 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:00:47.957656 | orchestrator | 2025-03-23 14:00:44 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:00:47.957783 | orchestrator | 2025-03-23 14:00:47 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:00:51.006741 | orchestrator | 2025-03-23 14:00:47 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:00:51.006890 | orchestrator | 2025-03-23 14:00:51 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:00:54.056296 | orchestrator | 2025-03-23 14:00:51 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:00:54.056488 | orchestrator | 2025-03-23 14:00:54 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:00:57.101563 | orchestrator | 2025-03-23 14:00:54 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:00:57.101691 | orchestrator | 2025-03-23 14:00:57 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:01:00.149200 | orchestrator | 2025-03-23 14:00:57 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:01:00.149298 | orchestrator | 2025-03-23 14:01:00 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 
14:01:03.192965 | orchestrator | 2025-03-23 14:01:00 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:01:03.193080 | orchestrator | 2025-03-23 14:01:03 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:01:06.246394 | orchestrator | 2025-03-23 14:01:03 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:01:06.246549 | orchestrator | 2025-03-23 14:01:06 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:01:09.291898 | orchestrator | 2025-03-23 14:01:06 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:01:09.292039 | orchestrator | 2025-03-23 14:01:09 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:01:12.348190 | orchestrator | 2025-03-23 14:01:09 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:01:12.348362 | orchestrator | 2025-03-23 14:01:12 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:01:15.403114 | orchestrator | 2025-03-23 14:01:12 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:01:15.403250 | orchestrator | 2025-03-23 14:01:15 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:01:18.451503 | orchestrator | 2025-03-23 14:01:15 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:01:18.451636 | orchestrator | 2025-03-23 14:01:18 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:01:21.499724 | orchestrator | 2025-03-23 14:01:18 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:01:21.499857 | orchestrator | 2025-03-23 14:01:21 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:01:24.550582 | orchestrator | 2025-03-23 14:01:21 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:01:24.550722 | orchestrator | 2025-03-23 14:01:24 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:01:27.609156 | orchestrator | 2025-03-23 14:01:24 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:01:27.609291 | orchestrator | 2025-03-23 14:01:27 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:01:30.660983 | orchestrator | 2025-03-23 14:01:27 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:01:30.661120 | orchestrator | 2025-03-23 14:01:30 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:01:33.718396 | orchestrator | 2025-03-23 14:01:30 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:01:33.718551 | orchestrator | 2025-03-23 14:01:33 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:01:36.765644 | orchestrator | 2025-03-23 14:01:33 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:01:36.765769 | orchestrator | 2025-03-23 14:01:36 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:01:39.807506 | orchestrator | 2025-03-23 14:01:36 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:01:39.807654 | orchestrator | 2025-03-23 14:01:39 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:01:42.853714 | orchestrator | 2025-03-23 14:01:39 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:01:42.853868 | orchestrator | 2025-03-23 14:01:42 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:01:45.907132 | orchestrator | 2025-03-23 14:01:42 | INFO  | Wait 1 second(s) 
until the next check 2025-03-23 14:01:45.907265 | orchestrator | 2025-03-23 14:01:45 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:01:48.958225 | orchestrator | 2025-03-23 14:01:45 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:01:48.958358 | orchestrator | 2025-03-23 14:01:48 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:01:52.009290 | orchestrator | 2025-03-23 14:01:48 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:01:52.009479 | orchestrator | 2025-03-23 14:01:52 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:01:55.073827 | orchestrator | 2025-03-23 14:01:52 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:01:55.073964 | orchestrator | 2025-03-23 14:01:55 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:01:58.128016 | orchestrator | 2025-03-23 14:01:55 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:01:58.128157 | orchestrator | 2025-03-23 14:01:58 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:02:01.175741 | orchestrator | 2025-03-23 14:01:58 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:02:01.175882 | orchestrator | 2025-03-23 14:02:01 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:02:04.237285 | orchestrator | 2025-03-23 14:02:01 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:02:04.237464 | orchestrator | 2025-03-23 14:02:04 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:02:07.298401 | orchestrator | 2025-03-23 14:02:04 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:02:07.298592 | orchestrator | 2025-03-23 14:02:07 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:02:10.352049 | orchestrator | 2025-03-23 14:02:07 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:02:10.352188 | orchestrator | 2025-03-23 14:02:10 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:02:13.405486 | orchestrator | 2025-03-23 14:02:10 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:02:13.405623 | orchestrator | 2025-03-23 14:02:13 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:02:16.465812 | orchestrator | 2025-03-23 14:02:13 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:02:16.465953 | orchestrator | 2025-03-23 14:02:16 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:02:19.522001 | orchestrator | 2025-03-23 14:02:16 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:02:19.522190 | orchestrator | 2025-03-23 14:02:19 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:02:22.572322 | orchestrator | 2025-03-23 14:02:19 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:02:22.572505 | orchestrator | 2025-03-23 14:02:22 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:02:25.624725 | orchestrator | 2025-03-23 14:02:22 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:02:25.624846 | orchestrator | 2025-03-23 14:02:25 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:02:28.669626 | orchestrator | 2025-03-23 14:02:25 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:02:28.669767 | orchestrator | 2025-03-23 
14:02:28 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:02:31.712442 | orchestrator | 2025-03-23 14:02:28 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:02:31.712575 | orchestrator | 2025-03-23 14:02:31 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:02:34.764684 | orchestrator | 2025-03-23 14:02:31 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:02:34.764831 | orchestrator | 2025-03-23 14:02:34 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:02:37.817761 | orchestrator | 2025-03-23 14:02:34 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:02:37.817897 | orchestrator | 2025-03-23 14:02:37 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:02:40.863880 | orchestrator | 2025-03-23 14:02:37 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:02:40.864019 | orchestrator | 2025-03-23 14:02:40 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:02:43.909244 | orchestrator | 2025-03-23 14:02:40 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:02:43.909463 | orchestrator | 2025-03-23 14:02:43 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:02:46.955590 | orchestrator | 2025-03-23 14:02:43 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:02:46.955727 | orchestrator | 2025-03-23 14:02:46 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:02:50.005996 | orchestrator | 2025-03-23 14:02:46 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:02:50.006176 | orchestrator | 2025-03-23 14:02:50 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:02:53.060538 | orchestrator | 2025-03-23 14:02:50 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:02:53.060687 | orchestrator | 2025-03-23 14:02:53 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:02:56.115002 | orchestrator | 2025-03-23 14:02:53 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:02:56.115139 | orchestrator | 2025-03-23 14:02:56 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:02:59.169779 | orchestrator | 2025-03-23 14:02:56 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:02:59.169950 | orchestrator | 2025-03-23 14:02:59 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:03:02.220262 | orchestrator | 2025-03-23 14:02:59 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:03:02.220431 | orchestrator | 2025-03-23 14:03:02 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:03:05.273747 | orchestrator | 2025-03-23 14:03:02 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:03:05.273886 | orchestrator | 2025-03-23 14:03:05 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:03:08.323884 | orchestrator | 2025-03-23 14:03:05 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:03:08.324015 | orchestrator | 2025-03-23 14:03:08 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:03:11.374485 | orchestrator | 2025-03-23 14:03:08 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:03:11.374616 | orchestrator | 2025-03-23 14:03:11 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 
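Because both the Zuul console and the watcher timestamp every record, the time the task spends in STARTED can be read directly off these lines. A small sketch that parses the "Zuul timestamp | node | inner record" layout seen above and measures the gap between the first STARTED report (13:56:28) and the one just above (14:03:11); the parsing rule is an assumption about this particular layout, not a general Zuul log format:

```python
from datetime import datetime

# Two STARTED reports copied from the log above.
lines = [
    "2025-03-23 13:56:28.908934 | orchestrator | 2025-03-23 13:56:28 | INFO  | "
    "Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED",
    "2025-03-23 14:03:11.374616 | orchestrator | 2025-03-23 14:03:11 | INFO  | "
    "Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED",
]

def inner_timestamp(line: str) -> datetime:
    # Drop the Zuul capture timestamp and node name, keep the inner record,
    # then parse its leading date and time.
    inner = line.split(" | ", 2)[2]
    return datetime.strptime(" ".join(inner.split()[:2]), "%Y-%m-%d %H:%M:%S")

print(inner_timestamp(lines[1]) - inner_timestamp(lines[0]))   # 0:06:43 for these two samples
```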
2025-03-23 14:03:14.414277 | orchestrator | 2025-03-23 14:03:11 | INFO  | Wait 1 second(s) until the next check
2025-03-23 14:03:14.414494 | orchestrator | 2025-03-23 14:03:14 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
[... the same pair of entries repeats about every 3 seconds; Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 remains in state STARTED ...]
2025-03-23 14:09:26.680299 | orchestrator | 2025-03-23 14:09:26 | INFO  | Task be6b545a-5b10-40d1-ae6a-0a808ea51711 is in state STARTED
2025-03-23 14:09:26.680526 | orchestrator | 2025-03-23 14:09:26 | INFO  | Wait 1 second(s) until the next check
[... both tasks are checked together for the next few cycles ...]
2025-03-23 14:09:35.844602 | orchestrator | 2025-03-23 14:09:35 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
2025-03-23 14:09:35.844753 | orchestrator | 2025-03-23 14:09:35 | INFO  | Task be6b545a-5b10-40d1-ae6a-0a808ea51711 is in state SUCCESS
2025-03-23 14:09:35.844996 | orchestrator | 2025-03-23 14:09:35 | INFO  | Wait 1 second(s) until the next check
[... the check continues about every 3 seconds; Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 stays in state STARTED through 14:18:20 ...]
2025-03-23 14:18:20.546724 | orchestrator | 2025-03-23 14:18:17 | INFO  | Wait 1 second(s) until the next check
2025-03-23 14:18:20.546884 | orchestrator | 2025-03-23 14:18:20 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 
14:18:23.611472 | orchestrator | 2025-03-23 14:18:20 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:18:23.611608 | orchestrator | 2025-03-23 14:18:23 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:18:26.657857 | orchestrator | 2025-03-23 14:18:23 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:18:26.657990 | orchestrator | 2025-03-23 14:18:26 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:18:29.703994 | orchestrator | 2025-03-23 14:18:26 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:18:29.704125 | orchestrator | 2025-03-23 14:18:29 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:18:32.757790 | orchestrator | 2025-03-23 14:18:29 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:18:32.757919 | orchestrator | 2025-03-23 14:18:32 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:18:35.807329 | orchestrator | 2025-03-23 14:18:32 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:18:35.807528 | orchestrator | 2025-03-23 14:18:35 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:18:38.861477 | orchestrator | 2025-03-23 14:18:35 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:18:38.861604 | orchestrator | 2025-03-23 14:18:38 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:18:41.910881 | orchestrator | 2025-03-23 14:18:38 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:18:41.911019 | orchestrator | 2025-03-23 14:18:41 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:18:44.965855 | orchestrator | 2025-03-23 14:18:41 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:18:44.965991 | orchestrator | 2025-03-23 14:18:44 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:18:48.016298 | orchestrator | 2025-03-23 14:18:44 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:18:48.016462 | orchestrator | 2025-03-23 14:18:48 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:18:51.071935 | orchestrator | 2025-03-23 14:18:48 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:18:51.072066 | orchestrator | 2025-03-23 14:18:51 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:18:54.120551 | orchestrator | 2025-03-23 14:18:51 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:18:54.120690 | orchestrator | 2025-03-23 14:18:54 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:18:57.161954 | orchestrator | 2025-03-23 14:18:54 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:18:57.162140 | orchestrator | 2025-03-23 14:18:57 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:19:00.207607 | orchestrator | 2025-03-23 14:18:57 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:19:00.207748 | orchestrator | 2025-03-23 14:19:00 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:19:03.257772 | orchestrator | 2025-03-23 14:19:00 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:19:03.257897 | orchestrator | 2025-03-23 14:19:03 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:19:06.303165 | orchestrator | 2025-03-23 14:19:03 | INFO  | Wait 1 second(s) 
until the next check 2025-03-23 14:19:06.303298 | orchestrator | 2025-03-23 14:19:06 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:19:09.348976 | orchestrator | 2025-03-23 14:19:06 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:19:09.349110 | orchestrator | 2025-03-23 14:19:09 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:19:12.399662 | orchestrator | 2025-03-23 14:19:09 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:19:12.399794 | orchestrator | 2025-03-23 14:19:12 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:19:15.454738 | orchestrator | 2025-03-23 14:19:12 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:19:15.454882 | orchestrator | 2025-03-23 14:19:15 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:19:18.502888 | orchestrator | 2025-03-23 14:19:15 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:19:18.503025 | orchestrator | 2025-03-23 14:19:18 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:19:21.565735 | orchestrator | 2025-03-23 14:19:18 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:19:21.565877 | orchestrator | 2025-03-23 14:19:21 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:19:24.620364 | orchestrator | 2025-03-23 14:19:21 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:19:24.620504 | orchestrator | 2025-03-23 14:19:24 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:19:24.620704 | orchestrator | 2025-03-23 14:19:24 | INFO  | Task c113cb79-f521-455d-879b-9d77ed1914e6 is in state STARTED 2025-03-23 14:19:27.674670 | orchestrator | 2025-03-23 14:19:24 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:19:27.674804 | orchestrator | 2025-03-23 14:19:27 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:19:27.675510 | orchestrator | 2025-03-23 14:19:27 | INFO  | Task c113cb79-f521-455d-879b-9d77ed1914e6 is in state STARTED 2025-03-23 14:19:30.725551 | orchestrator | 2025-03-23 14:19:27 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:19:30.725689 | orchestrator | 2025-03-23 14:19:30 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:19:30.727077 | orchestrator | 2025-03-23 14:19:30 | INFO  | Task c113cb79-f521-455d-879b-9d77ed1914e6 is in state STARTED 2025-03-23 14:19:33.790173 | orchestrator | 2025-03-23 14:19:30 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:19:33.790302 | orchestrator | 2025-03-23 14:19:33 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:19:33.792283 | orchestrator | 2025-03-23 14:19:33 | INFO  | Task c113cb79-f521-455d-879b-9d77ed1914e6 is in state STARTED 2025-03-23 14:19:36.853133 | orchestrator | 2025-03-23 14:19:33 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:19:36.853253 | orchestrator | 2025-03-23 14:19:36 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:19:36.855281 | orchestrator | 2025-03-23 14:19:36 | INFO  | Task c113cb79-f521-455d-879b-9d77ed1914e6 is in state SUCCESS 2025-03-23 14:19:39.903802 | orchestrator | 2025-03-23 14:19:36 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:19:39.903926 | orchestrator | 2025-03-23 14:19:39 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in 
state STARTED 2025-03-23 14:19:42.956206 | orchestrator | 2025-03-23 14:19:39 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:19:42.956394 | orchestrator | 2025-03-23 14:19:42 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:19:46.000377 | orchestrator | 2025-03-23 14:19:42 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:19:46.000520 | orchestrator | 2025-03-23 14:19:45 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:19:49.046864 | orchestrator | 2025-03-23 14:19:46 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:19:49.047000 | orchestrator | 2025-03-23 14:19:49 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:19:52.104728 | orchestrator | 2025-03-23 14:19:49 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:19:52.104859 | orchestrator | 2025-03-23 14:19:52 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:19:55.145825 | orchestrator | 2025-03-23 14:19:52 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:19:55.145958 | orchestrator | 2025-03-23 14:19:55 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:19:58.190659 | orchestrator | 2025-03-23 14:19:55 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:19:58.190814 | orchestrator | 2025-03-23 14:19:58 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:20:01.237638 | orchestrator | 2025-03-23 14:19:58 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:20:01.237746 | orchestrator | 2025-03-23 14:20:01 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:20:04.284655 | orchestrator | 2025-03-23 14:20:01 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:20:04.284772 | orchestrator | 2025-03-23 14:20:04 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:20:07.344026 | orchestrator | 2025-03-23 14:20:04 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:20:07.344163 | orchestrator | 2025-03-23 14:20:07 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:20:10.397844 | orchestrator | 2025-03-23 14:20:07 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:20:10.397981 | orchestrator | 2025-03-23 14:20:10 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:20:13.454859 | orchestrator | 2025-03-23 14:20:10 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:20:13.454984 | orchestrator | 2025-03-23 14:20:13 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:20:16.517405 | orchestrator | 2025-03-23 14:20:13 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:20:16.517540 | orchestrator | 2025-03-23 14:20:16 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:20:19.561743 | orchestrator | 2025-03-23 14:20:16 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:20:19.561880 | orchestrator | 2025-03-23 14:20:19 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:20:22.615578 | orchestrator | 2025-03-23 14:20:19 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:20:22.615717 | orchestrator | 2025-03-23 14:20:22 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:20:25.665581 | orchestrator | 2025-03-23 14:20:22 | 
INFO  | Wait 1 second(s) until the next check 2025-03-23 14:20:25.666228 | orchestrator | 2025-03-23 14:20:25 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:20:28.720407 | orchestrator | 2025-03-23 14:20:25 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:20:28.720550 | orchestrator | 2025-03-23 14:20:28 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:20:31.776059 | orchestrator | 2025-03-23 14:20:28 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:20:31.776190 | orchestrator | 2025-03-23 14:20:31 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:20:34.826270 | orchestrator | 2025-03-23 14:20:31 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:20:34.826438 | orchestrator | 2025-03-23 14:20:34 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:20:37.882710 | orchestrator | 2025-03-23 14:20:34 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:20:37.882814 | orchestrator | 2025-03-23 14:20:37 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:20:40.933916 | orchestrator | 2025-03-23 14:20:37 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:20:40.934116 | orchestrator | 2025-03-23 14:20:40 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:20:43.992085 | orchestrator | 2025-03-23 14:20:40 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:20:43.992237 | orchestrator | 2025-03-23 14:20:43 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:20:47.050750 | orchestrator | 2025-03-23 14:20:43 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:20:47.050892 | orchestrator | 2025-03-23 14:20:47 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:20:50.103511 | orchestrator | 2025-03-23 14:20:47 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:20:50.103648 | orchestrator | 2025-03-23 14:20:50 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:20:53.151045 | orchestrator | 2025-03-23 14:20:50 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:20:53.151180 | orchestrator | 2025-03-23 14:20:53 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:20:56.204714 | orchestrator | 2025-03-23 14:20:53 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:20:56.204847 | orchestrator | 2025-03-23 14:20:56 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:20:59.244734 | orchestrator | 2025-03-23 14:20:56 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:20:59.244883 | orchestrator | 2025-03-23 14:20:59 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:21:02.295447 | orchestrator | 2025-03-23 14:20:59 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:21:02.295586 | orchestrator | 2025-03-23 14:21:02 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:21:05.341881 | orchestrator | 2025-03-23 14:21:02 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:21:05.342007 | orchestrator | 2025-03-23 14:21:05 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:21:08.381001 | orchestrator | 2025-03-23 14:21:05 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:21:08.381124 | 
orchestrator | 2025-03-23 14:21:08 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:21:11.438414 | orchestrator | 2025-03-23 14:21:08 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:21:11.438550 | orchestrator | 2025-03-23 14:21:11 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:21:14.493381 | orchestrator | 2025-03-23 14:21:11 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:21:14.493514 | orchestrator | 2025-03-23 14:21:14 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:21:17.545521 | orchestrator | 2025-03-23 14:21:14 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:21:17.545661 | orchestrator | 2025-03-23 14:21:17 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:21:20.593767 | orchestrator | 2025-03-23 14:21:17 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:21:20.593895 | orchestrator | 2025-03-23 14:21:20 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:21:23.643502 | orchestrator | 2025-03-23 14:21:20 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:21:23.643633 | orchestrator | 2025-03-23 14:21:23 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:21:26.690912 | orchestrator | 2025-03-23 14:21:23 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:21:26.691018 | orchestrator | 2025-03-23 14:21:26 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:21:29.739212 | orchestrator | 2025-03-23 14:21:26 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:21:29.739422 | orchestrator | 2025-03-23 14:21:29 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:21:32.785665 | orchestrator | 2025-03-23 14:21:29 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:21:32.785795 | orchestrator | 2025-03-23 14:21:32 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:21:35.838311 | orchestrator | 2025-03-23 14:21:32 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:21:35.838438 | orchestrator | 2025-03-23 14:21:35 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:21:38.888236 | orchestrator | 2025-03-23 14:21:35 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:21:38.888398 | orchestrator | 2025-03-23 14:21:38 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:21:41.939609 | orchestrator | 2025-03-23 14:21:38 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:21:41.939766 | orchestrator | 2025-03-23 14:21:41 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:21:44.992869 | orchestrator | 2025-03-23 14:21:41 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:21:44.992998 | orchestrator | 2025-03-23 14:21:44 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:21:48.036849 | orchestrator | 2025-03-23 14:21:44 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:21:48.036989 | orchestrator | 2025-03-23 14:21:48 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:21:51.093659 | orchestrator | 2025-03-23 14:21:48 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:21:51.093798 | orchestrator | 2025-03-23 14:21:51 | INFO  | Task 
f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:21:54.149334 | orchestrator | 2025-03-23 14:21:51 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:21:54.149470 | orchestrator | 2025-03-23 14:21:54 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:21:57.204291 | orchestrator | 2025-03-23 14:21:54 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:21:57.204426 | orchestrator | 2025-03-23 14:21:57 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:22:00.255167 | orchestrator | 2025-03-23 14:21:57 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:22:00.255375 | orchestrator | 2025-03-23 14:22:00 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:22:03.308222 | orchestrator | 2025-03-23 14:22:00 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:22:03.308411 | orchestrator | 2025-03-23 14:22:03 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:22:06.349947 | orchestrator | 2025-03-23 14:22:03 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:22:06.350116 | orchestrator | 2025-03-23 14:22:06 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:22:09.398356 | orchestrator | 2025-03-23 14:22:06 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:22:09.398496 | orchestrator | 2025-03-23 14:22:09 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:22:12.451289 | orchestrator | 2025-03-23 14:22:09 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:22:12.451413 | orchestrator | 2025-03-23 14:22:12 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:22:15.503132 | orchestrator | 2025-03-23 14:22:12 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:22:15.503355 | orchestrator | 2025-03-23 14:22:15 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:22:18.548792 | orchestrator | 2025-03-23 14:22:15 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:22:18.548928 | orchestrator | 2025-03-23 14:22:18 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:22:21.596046 | orchestrator | 2025-03-23 14:22:18 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:22:21.596192 | orchestrator | 2025-03-23 14:22:21 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:22:24.643911 | orchestrator | 2025-03-23 14:22:21 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:22:24.644041 | orchestrator | 2025-03-23 14:22:24 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:22:27.697393 | orchestrator | 2025-03-23 14:22:24 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:22:27.697531 | orchestrator | 2025-03-23 14:22:27 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:22:30.740636 | orchestrator | 2025-03-23 14:22:27 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:22:30.740763 | orchestrator | 2025-03-23 14:22:30 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:22:33.791596 | orchestrator | 2025-03-23 14:22:30 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:22:33.791723 | orchestrator | 2025-03-23 14:22:33 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 
14:22:36.844579 | orchestrator | 2025-03-23 14:22:33 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:22:36.844710 | orchestrator | 2025-03-23 14:22:36 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:22:39.893600 | orchestrator | 2025-03-23 14:22:36 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:22:39.893737 | orchestrator | 2025-03-23 14:22:39 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:22:42.943045 | orchestrator | 2025-03-23 14:22:39 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:22:42.943171 | orchestrator | 2025-03-23 14:22:42 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:22:45.986923 | orchestrator | 2025-03-23 14:22:42 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:22:45.987052 | orchestrator | 2025-03-23 14:22:45 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:22:49.040287 | orchestrator | 2025-03-23 14:22:45 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:22:49.040422 | orchestrator | 2025-03-23 14:22:49 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:22:52.089858 | orchestrator | 2025-03-23 14:22:49 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:22:52.089994 | orchestrator | 2025-03-23 14:22:52 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:22:55.132141 | orchestrator | 2025-03-23 14:22:52 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:22:55.132334 | orchestrator | 2025-03-23 14:22:55 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:22:58.179503 | orchestrator | 2025-03-23 14:22:55 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:22:58.179637 | orchestrator | 2025-03-23 14:22:58 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:23:01.235562 | orchestrator | 2025-03-23 14:22:58 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:23:01.235691 | orchestrator | 2025-03-23 14:23:01 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:23:04.295640 | orchestrator | 2025-03-23 14:23:01 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:23:04.295778 | orchestrator | 2025-03-23 14:23:04 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:23:07.343399 | orchestrator | 2025-03-23 14:23:04 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:23:07.343523 | orchestrator | 2025-03-23 14:23:07 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:23:10.391361 | orchestrator | 2025-03-23 14:23:07 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:23:10.391494 | orchestrator | 2025-03-23 14:23:10 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:23:13.435556 | orchestrator | 2025-03-23 14:23:10 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:23:13.435690 | orchestrator | 2025-03-23 14:23:13 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:23:16.468183 | orchestrator | 2025-03-23 14:23:13 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:23:16.468380 | orchestrator | 2025-03-23 14:23:16 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:23:19.517655 | orchestrator | 2025-03-23 14:23:16 | INFO  | Wait 1 second(s) 
until the next check 2025-03-23 14:23:19.517787 | orchestrator | 2025-03-23 14:23:19 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:23:22.568756 | orchestrator | 2025-03-23 14:23:19 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:23:22.568894 | orchestrator | 2025-03-23 14:23:22 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:23:25.619846 | orchestrator | 2025-03-23 14:23:22 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:23:25.619982 | orchestrator | 2025-03-23 14:23:25 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:23:28.666158 | orchestrator | 2025-03-23 14:23:25 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:23:28.666346 | orchestrator | 2025-03-23 14:23:28 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:23:31.715706 | orchestrator | 2025-03-23 14:23:28 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:23:31.715849 | orchestrator | 2025-03-23 14:23:31 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:23:34.768076 | orchestrator | 2025-03-23 14:23:31 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:23:34.768266 | orchestrator | 2025-03-23 14:23:34 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:23:37.821146 | orchestrator | 2025-03-23 14:23:34 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:23:37.821320 | orchestrator | 2025-03-23 14:23:37 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:23:40.872355 | orchestrator | 2025-03-23 14:23:37 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:23:40.872482 | orchestrator | 2025-03-23 14:23:40 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:23:43.925352 | orchestrator | 2025-03-23 14:23:40 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:23:43.925490 | orchestrator | 2025-03-23 14:23:43 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:23:46.980288 | orchestrator | 2025-03-23 14:23:43 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:23:46.980412 | orchestrator | 2025-03-23 14:23:46 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:23:50.044313 | orchestrator | 2025-03-23 14:23:46 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:23:50.044457 | orchestrator | 2025-03-23 14:23:50 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:23:53.096580 | orchestrator | 2025-03-23 14:23:50 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:23:53.096720 | orchestrator | 2025-03-23 14:23:53 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:23:56.147565 | orchestrator | 2025-03-23 14:23:53 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:23:56.147706 | orchestrator | 2025-03-23 14:23:56 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:23:59.198259 | orchestrator | 2025-03-23 14:23:56 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:23:59.198393 | orchestrator | 2025-03-23 14:23:59 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:24:02.249211 | orchestrator | 2025-03-23 14:23:59 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:24:02.249357 | orchestrator | 2025-03-23 
14:24:02 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:24:05.297865 | orchestrator | 2025-03-23 14:24:02 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:24:05.297988 | orchestrator | 2025-03-23 14:24:05 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:24:05.298317 | orchestrator | 2025-03-23 14:24:05 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:24:08.341622 | orchestrator | 2025-03-23 14:24:08 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:24:11.390786 | orchestrator | 2025-03-23 14:24:08 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:24:11.390921 | orchestrator | 2025-03-23 14:24:11 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:24:14.437551 | orchestrator | 2025-03-23 14:24:11 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:24:14.437693 | orchestrator | 2025-03-23 14:24:14 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:24:17.477318 | orchestrator | 2025-03-23 14:24:14 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:24:17.477461 | orchestrator | 2025-03-23 14:24:17 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:24:20.533040 | orchestrator | 2025-03-23 14:24:17 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:24:20.533227 | orchestrator | 2025-03-23 14:24:20 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:24:23.586912 | orchestrator | 2025-03-23 14:24:20 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:24:23.587035 | orchestrator | 2025-03-23 14:24:23 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:24:26.633514 | orchestrator | 2025-03-23 14:24:23 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:24:26.633639 | orchestrator | 2025-03-23 14:24:26 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:24:29.678002 | orchestrator | 2025-03-23 14:24:26 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:24:29.679084 | orchestrator | 2025-03-23 14:24:29 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:24:32.725083 | orchestrator | 2025-03-23 14:24:29 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:24:32.725278 | orchestrator | 2025-03-23 14:24:32 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:24:35.772952 | orchestrator | 2025-03-23 14:24:32 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:24:35.773074 | orchestrator | 2025-03-23 14:24:35 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:24:38.823228 | orchestrator | 2025-03-23 14:24:35 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:24:38.823364 | orchestrator | 2025-03-23 14:24:38 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:24:41.867342 | orchestrator | 2025-03-23 14:24:38 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:24:41.867484 | orchestrator | 2025-03-23 14:24:41 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:24:44.917361 | orchestrator | 2025-03-23 14:24:41 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:24:44.917503 | orchestrator | 2025-03-23 14:24:44 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 
2025-03-23 14:24:47.973799 | orchestrator | 2025-03-23 14:24:44 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:24:47.973897 | orchestrator | 2025-03-23 14:24:47 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:24:51.027294 | orchestrator | 2025-03-23 14:24:47 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:24:51.027435 | orchestrator | 2025-03-23 14:24:51 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:24:54.091812 | orchestrator | 2025-03-23 14:24:51 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:24:54.091957 | orchestrator | 2025-03-23 14:24:54 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:24:57.138246 | orchestrator | 2025-03-23 14:24:54 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:24:57.138383 | orchestrator | 2025-03-23 14:24:57 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:25:00.189324 | orchestrator | 2025-03-23 14:24:57 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:25:00.189464 | orchestrator | 2025-03-23 14:25:00 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:25:03.247227 | orchestrator | 2025-03-23 14:25:00 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:25:03.247369 | orchestrator | 2025-03-23 14:25:03 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:25:06.300650 | orchestrator | 2025-03-23 14:25:03 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:25:06.300764 | orchestrator | 2025-03-23 14:25:06 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:25:09.345249 | orchestrator | 2025-03-23 14:25:06 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:25:09.345389 | orchestrator | 2025-03-23 14:25:09 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:25:12.391439 | orchestrator | 2025-03-23 14:25:09 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:25:12.391574 | orchestrator | 2025-03-23 14:25:12 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:25:15.436412 | orchestrator | 2025-03-23 14:25:12 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:25:15.436555 | orchestrator | 2025-03-23 14:25:15 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:25:18.477896 | orchestrator | 2025-03-23 14:25:15 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:25:18.478147 | orchestrator | 2025-03-23 14:25:18 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:25:21.526664 | orchestrator | 2025-03-23 14:25:18 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:25:21.526778 | orchestrator | 2025-03-23 14:25:21 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:25:24.574423 | orchestrator | 2025-03-23 14:25:21 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:25:24.574567 | orchestrator | 2025-03-23 14:25:24 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:25:27.624124 | orchestrator | 2025-03-23 14:25:24 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:25:27.624309 | orchestrator | 2025-03-23 14:25:27 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:25:30.679695 | orchestrator | 2025-03-23 14:25:27 | INFO  | Wait 1 
second(s) until the next check 2025-03-23 14:25:30.679818 | orchestrator | 2025-03-23 14:25:30 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:25:33.732021 | orchestrator | 2025-03-23 14:25:30 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:25:33.732224 | orchestrator | 2025-03-23 14:25:33 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:25:36.789480 | orchestrator | 2025-03-23 14:25:33 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:25:36.789598 | orchestrator | 2025-03-23 14:25:36 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:25:39.832778 | orchestrator | 2025-03-23 14:25:36 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:25:39.832911 | orchestrator | 2025-03-23 14:25:39 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:25:42.877560 | orchestrator | 2025-03-23 14:25:39 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:25:42.877703 | orchestrator | 2025-03-23 14:25:42 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:25:45.927709 | orchestrator | 2025-03-23 14:25:42 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:25:45.927848 | orchestrator | 2025-03-23 14:25:45 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:25:48.972657 | orchestrator | 2025-03-23 14:25:45 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:25:48.972780 | orchestrator | 2025-03-23 14:25:48 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:25:52.032313 | orchestrator | 2025-03-23 14:25:48 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:25:52.032464 | orchestrator | 2025-03-23 14:25:52 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:25:55.085736 | orchestrator | 2025-03-23 14:25:52 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:25:55.085865 | orchestrator | 2025-03-23 14:25:55 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:25:58.133336 | orchestrator | 2025-03-23 14:25:55 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:25:58.133461 | orchestrator | 2025-03-23 14:25:58 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:26:01.188595 | orchestrator | 2025-03-23 14:25:58 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:26:01.188719 | orchestrator | 2025-03-23 14:26:01 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:26:04.243537 | orchestrator | 2025-03-23 14:26:01 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:26:04.243676 | orchestrator | 2025-03-23 14:26:04 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:26:07.298009 | orchestrator | 2025-03-23 14:26:04 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:26:07.298217 | orchestrator | 2025-03-23 14:26:07 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:26:10.340654 | orchestrator | 2025-03-23 14:26:07 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:26:10.340790 | orchestrator | 2025-03-23 14:26:10 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:26:13.394613 | orchestrator | 2025-03-23 14:26:10 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:26:13.394753 | orchestrator | 
2025-03-23 14:26:13 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:26:16.444593 | orchestrator | 2025-03-23 14:26:13 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:26:16.444732 | orchestrator | 2025-03-23 14:26:16 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:26:19.491367 | orchestrator | 2025-03-23 14:26:16 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:26:19.491511 | orchestrator | 2025-03-23 14:26:19 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:26:22.536185 | orchestrator | 2025-03-23 14:26:19 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:26:22.536321 | orchestrator | 2025-03-23 14:26:22 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:26:25.572717 | orchestrator | 2025-03-23 14:26:22 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:26:25.572849 | orchestrator | 2025-03-23 14:26:25 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:26:28.626108 | orchestrator | 2025-03-23 14:26:25 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:26:28.626271 | orchestrator | 2025-03-23 14:26:28 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:26:31.672806 | orchestrator | 2025-03-23 14:26:28 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:26:31.672937 | orchestrator | 2025-03-23 14:26:31 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:26:34.729637 | orchestrator | 2025-03-23 14:26:31 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:26:34.729782 | orchestrator | 2025-03-23 14:26:34 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:26:37.773515 | orchestrator | 2025-03-23 14:26:34 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:26:37.773652 | orchestrator | 2025-03-23 14:26:37 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:26:40.824038 | orchestrator | 2025-03-23 14:26:37 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:26:40.824213 | orchestrator | 2025-03-23 14:26:40 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:26:43.870726 | orchestrator | 2025-03-23 14:26:40 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:26:43.870863 | orchestrator | 2025-03-23 14:26:43 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:26:46.923041 | orchestrator | 2025-03-23 14:26:43 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:26:46.923215 | orchestrator | 2025-03-23 14:26:46 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:26:49.975068 | orchestrator | 2025-03-23 14:26:46 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:26:49.975237 | orchestrator | 2025-03-23 14:26:49 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:26:53.033512 | orchestrator | 2025-03-23 14:26:49 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:26:53.033639 | orchestrator | 2025-03-23 14:26:53 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:26:56.090759 | orchestrator | 2025-03-23 14:26:53 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:26:56.090873 | orchestrator | 2025-03-23 14:26:56 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in 
state STARTED 2025-03-23 14:26:59.148843 | orchestrator | 2025-03-23 14:26:56 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:26:59.148982 | orchestrator | 2025-03-23 14:26:59 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:27:02.197557 | orchestrator | 2025-03-23 14:26:59 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:27:02.197699 | orchestrator | 2025-03-23 14:27:02 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:27:05.247957 | orchestrator | 2025-03-23 14:27:02 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:27:05.248087 | orchestrator | 2025-03-23 14:27:05 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:27:08.296786 | orchestrator | 2025-03-23 14:27:05 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:27:08.296919 | orchestrator | 2025-03-23 14:27:08 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:27:11.351150 | orchestrator | 2025-03-23 14:27:08 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:27:11.351303 | orchestrator | 2025-03-23 14:27:11 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:27:14.393698 | orchestrator | 2025-03-23 14:27:11 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:27:14.393833 | orchestrator | 2025-03-23 14:27:14 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:27:17.449868 | orchestrator | 2025-03-23 14:27:14 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:27:17.450001 | orchestrator | 2025-03-23 14:27:17 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:27:20.493740 | orchestrator | 2025-03-23 14:27:17 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:27:20.493867 | orchestrator | 2025-03-23 14:27:20 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:27:23.540548 | orchestrator | 2025-03-23 14:27:20 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:27:23.540684 | orchestrator | 2025-03-23 14:27:23 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:27:26.592739 | orchestrator | 2025-03-23 14:27:23 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:27:26.592880 | orchestrator | 2025-03-23 14:27:26 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:27:29.646451 | orchestrator | 2025-03-23 14:27:26 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:27:29.646584 | orchestrator | 2025-03-23 14:27:29 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:27:32.697792 | orchestrator | 2025-03-23 14:27:29 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:27:32.697940 | orchestrator | 2025-03-23 14:27:32 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:27:35.744806 | orchestrator | 2025-03-23 14:27:32 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:27:35.744935 | orchestrator | 2025-03-23 14:27:35 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:27:38.792307 | orchestrator | 2025-03-23 14:27:35 | INFO  | Wait 1 second(s) until the next check 2025-03-23 14:27:38.792441 | orchestrator | 2025-03-23 14:27:38 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED 2025-03-23 14:27:41.843857 | orchestrator | 2025-03-23 14:27:38 | 
INFO  | Wait 1 second(s) until the next check
2025-03-23 14:27:41.843986 | orchestrator | 2025-03-23 14:27:41 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
2025-03-23 14:27:44.896857 | orchestrator | 2025-03-23 14:27:41 | INFO  | Wait 1 second(s) until the next check
2025-03-23 14:27:44.896979 | orchestrator | 2025-03-23 14:27:44 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
[... the same pair of messages repeats roughly every 3 seconds; task f8079d8c-9512-4ecd-b2ac-9d3341f82384 remains in state STARTED from 14:27:41 until 14:35:31, with only the following state changes reported for a second task in between ...]
2025-03-23 14:29:25.591177 | orchestrator | 2025-03-23 14:29:25 | INFO  | Task 9c91b218-bf7d-4dfd-90d3-db53f3887568 is in state STARTED
2025-03-23 14:29:37.833055 | orchestrator | 2025-03-23 14:29:37 | INFO  | Task 9c91b218-bf7d-4dfd-90d3-db53f3887568 is in state SUCCESS
2025-03-23 14:35:31.787295 | orchestrator | 2025-03-23 14:35:28 | INFO  | Wait 1 second(s) until the next check
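The long run of "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" messages comes from a client that simply re-reads the task state and sleeps between checks. A minimal Python sketch of that pattern, assuming a hypothetical get_task_state(task_id) helper and an optional overall deadline (neither is taken from the OSISM code base); with no deadline the loop only ends when something outside it stops the build, which is what happens below (RESULT_TIMED_OUT):

    import time

    def wait_for_task(task_id, get_task_state, interval=1.0, deadline=None):
        """Poll a task until it leaves the STARTED state.

        get_task_state is a hypothetical callable returning the task state as a
        string; with deadline=None the loop runs until something outside it
        (for example the Zuul job timeout) aborts the build.
        """
        start = time.monotonic()
        while True:
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                return state
            if deadline is not None and time.monotonic() - start > deadline:
                raise TimeoutError(f"Task {task_id} still STARTED after {deadline:.0f}s")
            print(f"Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)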
2025-03-23 14:35:31.787431 | orchestrator | 2025-03-23 14:35:31 | INFO  | Task f8079d8c-9512-4ecd-b2ac-9d3341f82384 is in state STARTED
2025-03-23 14:35:33.884164 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-03-23 14:35:33.891829 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-03-23 14:35:34.590740 |
2025-03-23 14:35:34.590894 | PLAY [Post output play]
2025-03-23 14:35:34.619765 |
2025-03-23 14:35:34.619886 | LOOP [stage-output : Register sources]
2025-03-23 14:35:34.709922 |
2025-03-23 14:35:34.710406 | TASK [stage-output : Check sudo]
2025-03-23 14:35:35.410399 | orchestrator | sudo: a password is required
2025-03-23 14:35:35.754537 | orchestrator | ok: Runtime: 0:00:00.015454
2025-03-23 14:35:35.777600 |
2025-03-23 14:35:35.777838 | LOOP [stage-output : Set source and destination for files and folders]
2025-03-23 14:35:35.825028 |
2025-03-23 14:35:35.825287 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-03-23 14:35:35.908062 | orchestrator | ok
2025-03-23 14:35:35.919419 |
2025-03-23 14:35:35.919546 | LOOP [stage-output : Ensure target folders exist]
2025-03-23 14:35:36.358074 | orchestrator | ok: "docs"
2025-03-23 14:35:36.358669 |
2025-03-23 14:35:36.582280 | orchestrator | ok: "artifacts"
2025-03-23 14:35:36.817271 | orchestrator | ok: "logs"
2025-03-23 14:35:36.843529 |
2025-03-23 14:35:36.843670 | LOOP [stage-output : Copy files and folders to staging folder]
2025-03-23 14:35:36.878876 |
2025-03-23 14:35:36.879066 | TASK [stage-output : Make all log files readable]
2025-03-23 14:35:37.142726 | orchestrator | ok
2025-03-23 14:35:37.153240 |
2025-03-23 14:35:37.153368 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-03-23 14:35:37.199041 | orchestrator | skipping: Conditional result was False
2025-03-23 14:35:37.217017 |
2025-03-23 14:35:37.217166 | TASK [stage-output : Discover log files for compression]
2025-03-23 14:35:37.242547 | orchestrator | skipping: Conditional result was False
2025-03-23 14:35:37.256576 |
2025-03-23 14:35:37.256695 | LOOP [stage-output : Archive everything from logs]
2025-03-23 14:35:37.336747 |
2025-03-23 14:35:37.336888 | PLAY [Post cleanup play]
2025-03-23 14:35:37.391602 |
2025-03-23 14:35:37.391726 | TASK [Set cloud fact (Zuul deployment)]
2025-03-23 14:35:37.457800 | orchestrator | ok
2025-03-23 14:35:37.468821 |
2025-03-23 14:35:37.468923 | TASK [Set cloud fact (local deployment)]
2025-03-23 14:35:37.502944 | orchestrator | skipping: Conditional result was False
2025-03-23 14:35:37.516549 |
2025-03-23 14:35:37.516668 | TASK [Clean the cloud environment]
2025-03-23 14:35:38.101426 | orchestrator | 2025-03-23 14:35:38 - clean up servers
2025-03-23 14:35:38.942754 | orchestrator | 2025-03-23 14:35:38 - testbed-manager
2025-03-23 14:35:39.037794 | orchestrator | 2025-03-23 14:35:39 - testbed-node-3
2025-03-23 14:35:39.125805 | orchestrator | 2025-03-23 14:35:39 - testbed-node-4
2025-03-23 14:35:39.218684 | orchestrator | 2025-03-23 14:35:39 - testbed-node-5
2025-03-23 14:35:39.311972 | orchestrator | 2025-03-23 14:35:39 - testbed-node-2
2025-03-23 14:35:39.412521 | orchestrator | 2025-03-23 14:35:39 - testbed-node-0
2025-03-23 14:35:39.507885 | orchestrator | 2025-03-23 14:35:39 - testbed-node-1
2025-03-23 14:35:39.594136 | orchestrator | 2025-03-23 14:35:39 - clean up keypairs
2025-03-23 14:35:39.612131 | orchestrator | 2025-03-23 14:35:39 - testbed
2025-03-23 14:35:39.638623 | orchestrator | 2025-03-23 14:35:39 - wait for servers to be gone
2025-03-23 14:35:48.735257 | orchestrator | 2025-03-23 14:35:48 - clean up ports
2025-03-23 14:35:48.936396 | orchestrator | 2025-03-23 14:35:48 - 2a530607-e855-4437-a4e9-94f3c055e3af
2025-03-23 14:35:49.151872 | orchestrator | 2025-03-23 14:35:49 - 30841d5b-2a1a-4f86-ab02-1f94db8ee16d
2025-03-23 14:35:49.352812 | orchestrator | 2025-03-23 14:35:49 - 39dd17a0-4c09-48d2-8837-475c0d77c1b5
2025-03-23 14:35:49.711911 | orchestrator | 2025-03-23 14:35:49 - 421d5d32-ef26-42da-b577-d62ed9d2a1a3
2025-03-23 14:35:49.932140 | orchestrator | 2025-03-23 14:35:49 - 45855e77-56fd-4c85-b40f-18f32a050aef
2025-03-23 14:35:50.130646 | orchestrator | 2025-03-23 14:35:50 - d09537ff-1421-4fec-ac8d-4eccdc275ee4
2025-03-23 14:35:50.326779 | orchestrator | 2025-03-23 14:35:50 - e7ec1586-c8e5-4d80-b3f0-1bab50b5212a
2025-03-23 14:35:50.531087 | orchestrator | 2025-03-23 14:35:50 - clean up volumes
2025-03-23 14:35:50.673977 | orchestrator | 2025-03-23 14:35:50 - testbed-volume-0-node-base
2025-03-23 14:35:50.708261 | orchestrator | 2025-03-23 14:35:50 - testbed-volume-2-node-base
2025-03-23 14:35:50.748108 | orchestrator | 2025-03-23 14:35:50 - testbed-volume-1-node-base
2025-03-23 14:35:50.793543 | orchestrator | 2025-03-23 14:35:50 - testbed-volume-3-node-base
2025-03-23 14:35:50.835366 | orchestrator | 2025-03-23 14:35:50 - testbed-volume-4-node-base
2025-03-23 14:35:50.875833 | orchestrator | 2025-03-23 14:35:50 - testbed-volume-5-node-base
2025-03-23 14:35:50.913424 | orchestrator | 2025-03-23 14:35:50 - testbed-volume-manager-base
2025-03-23 14:35:50.952849 | orchestrator | 2025-03-23 14:35:50 - testbed-volume-5-node-5
2025-03-23 14:35:50.992331 | orchestrator | 2025-03-23 14:35:50 - testbed-volume-2-node-2
2025-03-23 14:35:51.033490 | orchestrator | 2025-03-23 14:35:51 - testbed-volume-0-node-0
2025-03-23 14:35:51.076720 | orchestrator | 2025-03-23 14:35:51 - testbed-volume-13-node-1
2025-03-23 14:35:51.115539 | orchestrator | 2025-03-23 14:35:51 - testbed-volume-16-node-4
2025-03-23 14:35:51.155388 | orchestrator | 2025-03-23 14:35:51 - testbed-volume-9-node-3
2025-03-23 14:35:51.197698 | orchestrator | 2025-03-23 14:35:51 - testbed-volume-6-node-0
2025-03-23 14:35:51.235007 | orchestrator | 2025-03-23 14:35:51 - testbed-volume-7-node-1
2025-03-23 14:35:51.279273 | orchestrator | 2025-03-23 14:35:51 - testbed-volume-3-node-3
2025-03-23 14:35:51.320137 | orchestrator | 2025-03-23 14:35:51 - testbed-volume-4-node-4
2025-03-23 14:35:51.360840 | orchestrator | 2025-03-23 14:35:51 - testbed-volume-14-node-2
2025-03-23 14:35:51.399973 | orchestrator | 2025-03-23 14:35:51 - testbed-volume-17-node-5
2025-03-23 14:35:51.442643 | orchestrator | 2025-03-23 14:35:51 - testbed-volume-1-node-1
2025-03-23 14:35:51.484167 | orchestrator | 2025-03-23 14:35:51 - testbed-volume-11-node-5
2025-03-23 14:35:51.523125 | orchestrator | 2025-03-23 14:35:51 - testbed-volume-8-node-2
2025-03-23 14:35:51.563649 | orchestrator | 2025-03-23 14:35:51 - testbed-volume-10-node-4
2025-03-23 14:35:51.601220 | orchestrator | 2025-03-23 14:35:51 - testbed-volume-12-node-0
2025-03-23 14:35:51.643907 | orchestrator | 2025-03-23 14:35:51 - testbed-volume-15-node-3
2025-03-23 14:35:51.686673 | orchestrator | 2025-03-23 14:35:51 - disconnect routers
2025-03-23 14:35:51.744727 | orchestrator | 2025-03-23 14:35:51 - testbed
2025-03-23 14:35:52.372493 | orchestrator | 2025-03-23 14:35:52 - clean up subnets
2025-03-23 14:35:52.409021 | orchestrator | 2025-03-23 14:35:52 - subnet-testbed-management
2025-03-23 14:35:52.536849 | orchestrator | 2025-03-23 14:35:52 - clean up networks
2025-03-23 14:35:52.700949 | orchestrator | 2025-03-23 14:35:52 - net-testbed-management
2025-03-23 14:35:52.998542 | orchestrator | 2025-03-23 14:35:52 - clean up security groups
2025-03-23 14:35:53.034649 | orchestrator | 2025-03-23 14:35:53 - testbed-management
2025-03-23 14:35:53.131976 | orchestrator | 2025-03-23 14:35:53 - testbed-node
2025-03-23 14:35:53.232766 | orchestrator | 2025-03-23 14:35:53 - clean up floating ips
2025-03-23 14:35:53.263592 | orchestrator | 2025-03-23 14:35:53 - 81.163.193.177
2025-03-23 14:35:53.653625 | orchestrator | 2025-03-23 14:35:53 - clean up routers
2025-03-23 14:35:53.705690 | orchestrator | 2025-03-23 14:35:53 - testbed
2025-03-23 14:35:54.569235 | orchestrator | changed
2025-03-23 14:35:54.612963 |
2025-03-23 14:35:54.613062 | PLAY RECAP
2025-03-23 14:35:54.613116 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-03-23 14:35:54.613141 |
2025-03-23 14:35:54.731922 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-03-23 14:35:54.738500 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-03-23 14:35:55.517223 |
2025-03-23 14:35:55.517431 | PLAY [Base post-fetch]
2025-03-23 14:35:55.547172 |
2025-03-23 14:35:55.547312 | TASK [fetch-output : Set log path for multiple nodes]
2025-03-23 14:35:55.634365 | orchestrator | skipping: Conditional result was False
2025-03-23 14:35:55.650978 |
2025-03-23 14:35:55.651181 | TASK [fetch-output : Set log path for single node]
2025-03-23 14:35:55.705673 | orchestrator | ok
2025-03-23 14:35:55.715999 |
2025-03-23 14:35:55.716118 | LOOP [fetch-output : Ensure local output dirs]
2025-03-23 14:35:56.185132 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/beeeea37af1a4630ae807f6527409ece/work/logs"
2025-03-23 14:35:56.477155 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/beeeea37af1a4630ae807f6527409ece/work/artifacts"
2025-03-23 14:35:56.736975 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/beeeea37af1a4630ae807f6527409ece/work/docs"
2025-03-23 14:35:56.760976 |
2025-03-23 14:35:56.761120 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-03-23 14:35:57.570210 | orchestrator | changed: .d..t...... ./
2025-03-23 14:35:57.570620 | orchestrator | changed: All items complete
2025-03-23 14:35:57.570693 |
2025-03-23 14:35:58.170635 | orchestrator | changed: .d..t...... ./
2025-03-23 14:35:58.742812 | orchestrator | changed: .d..t...... ./
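The "Clean the cloud environment" task above tears the testbed project down in dependency order: servers and the keypair first, then, once the servers are gone, ports, volumes, router interfaces, subnets, networks, security groups, floating IPs and finally the router itself. A rough openstacksdk sketch of that order, assuming a clouds.yaml entry named "testbed" and that every testbed resource carries a "testbed" name prefix (the task actually runs an OSISM cleanup script that is not shown in this log):

    import openstack
    from openstack import exceptions

    conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry

    def named(resources, prefix="testbed"):
        """Keep only resources whose name carries the testbed prefix."""
        return [r for r in resources if (r.name or "").startswith(prefix)]

    # clean up servers and keypairs, then wait for the servers to be gone
    servers = named(conn.compute.servers())
    for server in servers:
        conn.compute.delete_server(server)
    for keypair in named(conn.compute.keypairs()):
        conn.compute.delete_keypair(keypair)
    for server in servers:
        conn.compute.wait_for_delete(server)

    # clean up leftover ports on the management network and the testbed volumes
    networks = named(conn.network.networks(), prefix="net-testbed")
    for network in networks:
        for port in conn.network.ports(network_id=network.id):
            if not (port.device_owner or "").startswith("network:router"):
                conn.network.delete_port(port)
    for volume in named(conn.block_storage.volumes()):
        conn.block_storage.delete_volume(volume)

    # disconnect routers, then remove subnets and networks
    routers = named(conn.network.routers())
    subnets = named(conn.network.subnets(), prefix="subnet-testbed")
    for router in routers:
        for subnet in subnets:
            try:
                conn.network.remove_interface_from_router(router, subnet_id=subnet.id)
            except exceptions.SDKException:
                pass  # interface may already be detached
    for subnet in subnets:
        conn.network.delete_subnet(subnet)
    for network in networks:
        conn.network.delete_network(network)

    # security groups, floating IPs and finally the router itself
    for group in named(conn.network.security_groups()):
        conn.network.delete_security_group(group)
    for ip in conn.network.ips():
        # assumes the project holds only testbed floating IPs
        conn.network.delete_ip(ip)
    for router in routers:
        conn.network.delete_router(router)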
2025-03-23 14:35:58.781209 |
2025-03-23 14:35:58.781335 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-03-23 14:35:58.825236 | orchestrator | skipping: Conditional result was False
2025-03-23 14:35:58.832046 | orchestrator | skipping: Conditional result was False
2025-03-23 14:35:58.882061 |
2025-03-23 14:35:58.882150 | PLAY RECAP
2025-03-23 14:35:58.882203 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-03-23 14:35:58.882230 |
2025-03-23 14:35:58.991483 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-03-23 14:35:58.998061 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-03-23 14:35:59.694490 |
2025-03-23 14:35:59.694626 | PLAY [Base post]
2025-03-23 14:35:59.722764 |
2025-03-23 14:35:59.722884 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-03-23 14:36:00.500999 | orchestrator | changed
2025-03-23 14:36:00.537135 |
2025-03-23 14:36:00.537256 | PLAY RECAP
2025-03-23 14:36:00.537320 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-03-23 14:36:00.537430 |
2025-03-23 14:36:00.648048 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-03-23 14:36:00.651404 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-03-23 14:36:01.376206 |
2025-03-23 14:36:01.376359 | PLAY [Base post-logs]
2025-03-23 14:36:01.392539 |
2025-03-23 14:36:01.392665 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-03-23 14:36:01.836192 | localhost | changed
2025-03-23 14:36:01.845080 |
2025-03-23 14:36:01.845320 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-03-23 14:36:01.878703 | localhost | ok
2025-03-23 14:36:01.895996 |
2025-03-23 14:36:01.896132 | TASK [Set zuul-log-path fact]
2025-03-23 14:36:01.915077 | localhost | ok
2025-03-23 14:36:01.926840 |
2025-03-23 14:36:01.926959 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-03-23 14:36:01.961767 | localhost | ok
2025-03-23 14:36:01.971704 |
2025-03-23 14:36:01.971845 | TASK [upload-logs : Create log directories]
2025-03-23 14:36:02.510470 | localhost | changed
2025-03-23 14:36:02.517672 |
2025-03-23 14:36:02.517815 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-03-23 14:36:03.014561 | localhost -> localhost | ok: Runtime: 0:00:00.006967
2025-03-23 14:36:03.020914 |
2025-03-23 14:36:03.021037 | TASK [upload-logs : Upload logs to log server]
2025-03-23 14:36:03.584901 | localhost | Output suppressed because no_log was given
2025-03-23 14:36:03.591730 |
2025-03-23 14:36:03.591917 | LOOP [upload-logs : Compress console log and json output]
2025-03-23 14:36:03.655119 | localhost | skipping: Conditional result was False
2025-03-23 14:36:03.672093 | localhost | skipping: Conditional result was False
2025-03-23 14:36:03.687494 |
2025-03-23 14:36:03.687672 | LOOP [upload-logs : Upload compressed console log and json output]
2025-03-23 14:36:03.747874 | localhost | skipping: Conditional result was False
2025-03-23 14:36:03.748589 |
2025-03-23 14:36:03.760106 | localhost | skipping: Conditional result was False
2025-03-23 14:36:03.773045 |
2025-03-23 14:36:03.773243 | LOOP [upload-logs : Upload console log and json output]